I’m Ramin, currently working with the Autonomy perception team at Ford, developing perception solutions for autonomous driving. Prior to joining Ford, I was a Ph.D. student at the University of Tennessee, Knoxville (UTK), in the Advanced Imaging and Collaborative Information Processing (AICIP) lab, advised by Dr. Hairong Qi. My research focused on radar-camera sensor fusion for object detection and tracking in autonomous vehicles. During my time at UTK, I also led the Advanced Driver Assistance Systems (ADAS) and Connected and Automated Vehicles (CAV) teams in the EcoCAR 3 and EcoCAR Mobility Challenge competitions.
Ford Autonomous Vehicles (AV) & Mobility is dedicated to enhancing freedom of movement through a suite of mobility products and services, from micromobility to microtransit.
Blueberry Technology enhances user mobility by providing autonomous wheelchairs for indoor use.
August 2018 - August 2021, Knoxville, TN
The EcoCAR Mobility Challenge was the 12th U.S. Department of Energy (DOE) Advanced Vehicle Technology Competition (AVTC) series. The four-year competition challenged 11 university teams to apply advanced propulsion systems, as well as connected and automated vehicle technology, to improve the energy efficiency, safety, and consumer appeal of the 2019 Chevrolet Blazer.
August 2017 - May 2018, Knoxville, TN
EcoCAR 3 was the 11th U.S. Department of Energy (DOE) Advanced Vehicle Technology Competition (AVTC) series and North America’s premier collegiate automotive engineering competition. The U.S. DOE and General Motors challenged 16 North American universities to redesign a Chevrolet Camaro into a hybrid-electric car that reduces environmental impact while maintaining the muscle and performance expected from this iconic American car.
Ph.D. in Computer Engineering - Computer Vision
MSc in Electrical Engineering - Telecommunication Systems
BSc in Electrical Engineering, 2011
A center-based radar and camera fusion framework for 3D object detection in autonomous vehicles.
The 12th U.S. Department of Energy (DOE) Advanced Vehicle Technology Competition (AVTC) series, challenging 11 university teams to apply advanced propulsion systems, as well as connected and automated vehicle technology, to improve the energy efficiency, safety, and consumer appeal of the 2019 Chevrolet Blazer.
EcoCAR 3 challenged 16 North American university teams to redesign a 2016 Chevrolet Camaro. The ADAS team focused on integrating the sensing system on the Camaro and deploying driver feedback to improve efficiency and safety.
The IARPA Functional Map of the World (fMoW) challenge promotes research in object identification and classification, with the goal of automatically identifying facility, building, and land use categories from satellite imagery. The dataset consists of 4- and 8-band multispectral images spanning 63 categories.
The SpaceNet 3 challenge focuses on determining road networks and routing information directly from satellite imagery. The SpaceNet 3 dataset contains ~8,000 km of roads across the four SpaceNet Areas of Interest.
In this paper, we focus on the problem of radar and camera sensor fusion and propose a middle-fusion approach that exploits both radar and camera data for 3D object detection. Our approach, called CenterFusion, first uses a center point detection network to detect objects by identifying their center points on the image. It then solves the key data association problem with a novel frustum-based method that associates radar detections with their corresponding object’s center point. The associated radar detections are used to generate radar-based feature maps that complement the image features and to regress object properties such as depth, rotation, and velocity.
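To make the frustum-based association step easier to picture, here is a minimal sketch of the idea in Python. It is illustrative only, not the CenterFusion implementation: the function name `frustum_associate`, the `depth_tol` window, and the pinhole parameters (`fx`, `fy`, `cx`, `cy`) are hypothetical, and the sketch assumes radar returns are already expressed in the camera coordinate frame.

```python
import numpy as np

def frustum_associate(box_2d, depth, depth_tol, radar_points, fx, fy, cx, cy):
    """Pick the radar return inside an object's frustum (illustrative sketch).

    box_2d:       (x1, y1, x2, y2) image-plane bounding box of the object.
    depth:        the object's depth as estimated from the image.
    radar_points: (N, 3) radar returns (x, y, z) in camera coordinates.
    Returns the best-matching radar point, or None if the frustum is empty.
    """
    # Project radar returns onto the image plane with a pinhole model.
    z = radar_points[:, 2]
    u = fx * radar_points[:, 0] / z + cx
    v = fy * radar_points[:, 1] / z + cy

    x1, y1, x2, y2 = box_2d
    # A point is in the frustum if it lands inside the 2D box and lies
    # within a depth window around the image-based depth estimate.
    in_frustum = ((u >= x1) & (u <= x2) & (v >= y1) & (v <= y2) &
                  (np.abs(z - depth) <= depth_tol))
    candidates = radar_points[in_frustum]
    if len(candidates) == 0:
        return None
    # Keep the candidate closest to the estimated depth.
    return candidates[np.argmin(np.abs(candidates[:, 2] - depth))]
```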
In this work, we propose an end-to-end network for joint object detection and tracking based on radar and camera sensor fusion. Our proposed method uses a center-based radar-camera fusion algorithm for object detection and utilizes a greedy algorithm for object association.
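The greedy association step can be sketched as matching the closest track-detection pairs first. The snippet below is a hedged illustration rather than the network’s actual tracking head: `greedy_associate`, `max_dist`, and the use of plain 2D center distance are assumptions made for the sake of the example.

```python
import numpy as np

def greedy_associate(track_centers, det_centers, max_dist):
    """Greedily match new detections to existing tracks (illustrative).

    track_centers: (T, 2) predicted track centers on the image plane.
    det_centers:   (D, 2) detected object centers.
    Returns a list of (track_idx, det_idx) pairs.
    """
    # Pairwise Euclidean distances between every track and detection.
    dists = np.linalg.norm(track_centers[:, None, :] - det_centers[None, :, :],
                           axis=-1)
    matches, used_tracks, used_dets = [], set(), set()
    # Commit the closest unused (track, detection) pair first, then repeat.
    for flat_idx in np.argsort(dists, axis=None):
        t, d = np.unravel_index(flat_idx, dists.shape)
        if t in used_tracks or d in used_dets:
            continue
        if dists[t, d] > max_dist:
            break  # every remaining pair is even farther apart
        matches.append((int(t), int(d)))
        used_tracks.add(t)
        used_dets.add(d)
    return matches
```

Committing the globally closest pair first is cheaper than solving an optimal assignment and tends to work well when matches are unambiguous.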
In this paper, we present a novel radar-camera sensor fusion framework for accurate object detection and distance estimation in autonomous driving scenarios. The proposed architecture uses a middle-fusion approach to fuse radar point clouds and RGB images.
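As a rough picture of what middle fusion means here, the sketch below projects radar points into image-aligned depth and velocity channels and concatenates them with image features. It is a simplified stand-in for the actual architecture; `middle_fuse` and its single-pixel splatting of radar returns are illustrative assumptions.

```python
import numpy as np

def middle_fuse(image_feats, radar_points, fx, fy, cx, cy):
    """Render radar returns into image-aligned channels and concatenate
    them with the image features (illustrative middle-fusion step).

    image_feats:  (C, H, W) feature map from the image branch.
    radar_points: (N, 4) radar returns (x, y, z, velocity) in camera coords.
    """
    _, H, W = image_feats.shape
    radar_map = np.zeros((2, H, W), dtype=image_feats.dtype)  # depth + velocity
    z = radar_points[:, 2]
    cols = (fx * radar_points[:, 0] / z + cx).astype(int)
    rows = (fy * radar_points[:, 1] / z + cy).astype(int)
    inside = (cols >= 0) & (cols < W) & (rows >= 0) & (rows < H)
    # Splat each visible radar return into the depth and velocity channels.
    radar_map[0, rows[inside], cols[inside]] = z[inside]
    radar_map[1, rows[inside], cols[inside]] = radar_points[inside, 3]
    # Channel-wise concatenation: downstream heads see both modalities.
    return np.concatenate([image_feats, radar_map], axis=0)
```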
In this paper, we introduce RRPN, a radar-based real-time region proposal algorithm for object detection in autonomous vehicles. RRPN generates object proposals by mapping radar detections to the image coordinate system and generating pre-defined anchor boxes for each mapped radar detection point. These anchor boxes are then transformed and scaled based on the object’s distance from the vehicle to provide more accurate proposals for the detected objects.
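A minimal sketch of the proposal mechanism follows, assuming radar points in camera coordinates and a pinhole projection. The function `rrpn_proposals`, the `ref_depth` reference distance, and the anchor templates are hypothetical names for illustration, not the paper’s code.

```python
import numpy as np

def rrpn_proposals(radar_points, anchors, fx, fy, cx, cy, ref_depth=10.0):
    """Generate distance-scaled anchor boxes from radar detections (illustrative).

    radar_points: (N, 3) radar returns (x, y, z) in camera coordinates.
    anchors:      list of (width, height) templates defined at ref_depth meters.
    Returns an (N * len(anchors), 4) array of (x1, y1, x2, y2) proposals.
    """
    proposals = []
    for x, y, z in radar_points:
        # Map the radar detection to image coordinates (pinhole model).
        u = fx * x / z + cx
        v = fy * y / z + cy
        # Scale anchors inversely with distance: farther objects appear smaller.
        scale = ref_depth / z
        for w, h in anchors:
            ws, hs = w * scale, h * scale
            proposals.append((u - ws / 2, v - hs / 2, u + ws / 2, v + hs / 2))
    return np.asarray(proposals)
```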