Researchers at NYU Tandon School of Engineering have unveiled an AI-powered system that can instantly analyze food photos and provide detailed nutritional information.
Imagine snapping a photo of your meal and instantly receiving its calorie count, fat content and nutritional breakdown. This futuristic vision is inching closer to reality thanks to innovative research from the NYU Tandon School of Engineering. A pioneering AI system developed by the university’s researchers promises to revolutionize how we manage our diets, weight, diabetes and other health conditions linked to nutrition.
The technology, outlined in a paper presented at the 6th IEEE International Conference on Mobile Computing and Sustainable Informatics, leverages advanced deep-learning algorithms to recognize food items in images and calculate their nutritional content. This includes key metrics such as calories, protein, carbohydrates and fat.
“Traditional methods of tracking food intake rely heavily on self-reporting, which is notoriously unreliable,” lead author Prabodh Panindre, an associate research professor in the Department of Mechanical Engineering at NYU Tandon, said in a news release. “Our system removes human error from the equation.”
For over a decade, NYU’s Fire Research Group, including Panindre and co-author Sunil Kumar, has been investigating critical firefighter health challenges. Studies reveal that a significant percentage of both career and volunteer firefighters are overweight or obese, leading to increased cardiovascular risks and operational challenges. These alarming statistics directly motivated the development of the AI-powered food-tracking system.
Creating reliable food-recognition AI has proven difficult. Previous efforts foundered on the immense visual diversity of food, among other issues.
“The sheer visual diversity of food is staggering,” added Kumar, a professor of mechanical engineering at NYU Abu Dhabi and global network professor of mechanical engineering at NYU Tandon. “Unlike manufactured objects with standardized appearances, the same dish can look dramatically different based on who prepared it.”
Another hurdle was accurately estimating portion sizes, which is crucial for nutritional assessments. The NYU team's breakthrough is a volumetric computation function that uses image processing to measure the area each food item occupies on the plate. From that area, the system infers portion size, turning 2D images into nutritional assessments without requiring any manual input.
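To make the idea concrete, here is a minimal sketch of area-based portion scaling. It is not the NYU team's published code; the segmentation mask, the per-square-centimeter nutrition table, and the pixel-to-area calibration factor are all illustrative assumptions.

```python
# Hypothetical sketch: scale per-area nutrition values by the physical
# area a food item covers, derived from a binary segmentation mask.
import numpy as np

# Assumed reference table: nutrition per cm^2 of plate area per class.
# Values are illustrative, not taken from the paper.
NUTRITION_PER_CM2 = {
    "pizza": {"calories": 2.9, "protein_g": 0.12, "carbs_g": 0.33, "fat_g": 0.11},
}

def estimate_nutrition(mask: np.ndarray, cm2_per_pixel: float, label: str) -> dict:
    """Convert a mask's pixel count to cm^2, then scale the per-area table."""
    area_cm2 = float(mask.sum()) * cm2_per_pixel
    return {k: round(v * area_cm2, 1) for k, v in NUTRITION_PER_CM2[label].items()}

# Toy example: a pizza slice covering 4,000 pixels at 0.02 cm^2 per pixel.
mask = np.zeros((80, 120), dtype=np.uint8)
mask[20:60, 20:120] = 1
print(estimate_nutrition(mask, 0.02, "pizza"))
# {'calories': 232.0, 'protein_g': 9.6, 'carbs_g': 26.4, 'fat_g': 8.8}
```

The calibration factor (how many square centimeters one pixel represents) would in practice come from a reference object or known plate size in the photo.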
Efficient real-time processing has also been a challenge. Many earlier models required heavy computational power and cloud processing, which introduced delays and privacy concerns. The NYU researchers instead built their food-identification system on YOLOv8 with ONNX Runtime, so it runs as a website that users can access through their phone's web browser.
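The export-then-serve pattern the paper describes looks roughly like this. The sketch below uses the public Ultralytics and ONNX Runtime Python APIs with stock COCO weights; the team's actual model, class list, and browser deployment (via onnxruntime-web rather than the Python runtime shown here) are not public, so treat the specifics as assumptions.

```python
# Sketch: export YOLOv8 to ONNX once, then run lightweight inference
# with ONNX Runtime, no full deep-learning framework required.
import numpy as np
import onnxruntime as ort
from ultralytics import YOLO

# One-time export: converts the PyTorch weights to yolov8n.onnx.
YOLO("yolov8n.pt").export(format="onnx")

# Inference with ONNX Runtime on CPU.
session = ort.InferenceSession("yolov8n.onnx", providers=["CPUExecutionProvider"])
image = np.random.rand(1, 3, 640, 640).astype(np.float32)  # stand-in for a meal photo
(raw,) = session.run(None, {"images": image})
print(raw.shape)  # (1, 84, 8400) for the stock COCO model; NMS decoding follows
```

Running the exported model in the browser keeps photos on the user's device, which addresses the privacy concern the researchers cite.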
When tested on various foods, including a slice of pizza and the South Indian dish idli sambhar, the system provided nutritional values closely aligning with standard references.
“One of our goals was to ensure the system works across diverse cuisines and food presentations,” Panindre added. “We wanted it to be as accurate with a hot dog — 280 calories according to our system — as it is with baklava, a Middle Eastern pastry that our system identifies as having 310 calories and 18 grams of fat.”
The researchers refined their dataset to 95,000 images across 214 food categories by merging similar categories and adding examples of underrepresented food types. The system achieved a mean Average Precision (mAP) score of 0.7941 at an Intersection over Union (IoU) threshold of 0.5, meaning its detections matched the ground truth roughly 80% of the time, even when food items were partially obscured.
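For readers unfamiliar with the metric, IoU measures how well a predicted bounding box overlaps the hand-labeled one; at the 0.5 threshold used here, a detection counts as correct when the overlap is at least half of the combined area. A worked sketch (not project code):

```python
# IoU for axis-aligned boxes given as (x1, y1, x2, y2).
def iou(box_a, box_b):
    """Intersection area divided by union area of the two boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A prediction shifted 15 pixels off a 100x100 ground-truth box still
# clears the 0.5 threshold and would count as a correct detection.
print(iou((10, 10, 110, 110), (25, 25, 125, 125)))  # ~0.57
```

mAP then averages detection precision across all 214 categories, so the 0.7941 figure summarizes performance over the whole menu of foods, not any single dish.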
Currently available as a web application, the system is described as a “proof-of-concept” that may soon be adapted for broader health care applications.
Source: NYU Tandon School of Engineering