
Enhancing Surveillance Techniques

Comparison of real and synthetic intersection

A research group led by Brian Rigling at Wright State University and Todd Rovito at the Air Force Research Laboratory developed a simple synthetic intersection and a synthetic straight test track modeled on Sadr City, Iraq. The comparison view is a satellite image of the actual intersection.

Object recognition is an important problem with many applications of interest to the Air Force. It is a key enabler of autonomous exploitation of intelligence, surveillance and reconnaissance (ISR) data, which can make the automated searching of millions of hours of video practical. Despite decades of research into the problem of object recognition, success in all operating conditions has been elusive.

Recently, success in pattern recognition has been demonstrated by algorithms that exploit large amounts of training data, such as those employed by IBM’s Watson supercomputer, which defeated the top human Jeopardy! champions. Watson was preloaded with multiple terabytes of digital books indexed for quick retrieval.

Brian Rigling, Ph.D., is a member of a Wright State University research team working with colleagues at the Air Force Research Laboratory (AFRL) to apply this data-intensive technique to electro-optical exploitation for object recognition. The approach requires making large sets of electro-optical training data available to researchers.

“Computer-modeled scenes are described by five pieces of information: scene object geometry, camera viewpoint, textures, lighting and shading,” explained Rigling. “Our goal was to create predictable, realistic-looking imagery, so we selected LuxRender, a free, unbiased physics-based renderer that can solve the rendering equation for general lighting.” 
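For reference, the equation an unbiased renderer such as LuxRender approximates is the rendering equation. The article does not write it out; the standard form from the graphics literature is:

```latex
L_o(x, \omega_o) = L_e(x, \omega_o)
  + \int_{\Omega} f_r(x, \omega_i, \omega_o)\,
      L_i(x, \omega_i)\,(\omega_i \cdot n)\; d\omega_i
```

Here L_o is the radiance leaving surface point x in direction ω_o, L_e is emitted radiance, f_r is the surface’s bidirectional reflectance distribution function (BRDF), and the integral gathers incoming radiance L_i over the hemisphere Ω about the surface normal n. “Unbiased” means the renderer’s Monte Carlo estimate of this integral has an expected value equal to the true solution, which is what makes the imagery predictable as well as realistic-looking.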

However, this setup does not account for all lighting phenomena and camera viewpoints, so the team created a rendering pipeline that steps the simulated camera every 3 degrees over a hemisphere and renders each viewpoint under 17 lighting placements. Each frame of data takes three minutes to process, and each vehicle comprises 62,000 frames, so a single vehicle would take 92 days to compute on a single computer. Because every frame is independent, the pipeline can render many frames simultaneously; by leveraging computational resources at the Ohio Supercomputer Center, the team could render 2,048 frames at a time, reducing the run time to a few hours.
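The workload described above is embarrassingly parallel, and the sketch below (hypothetical code, not the team’s actual pipeline) shows how such a per-vehicle job list might be enumerated. The article does not give the exact angular grid; the 3-degree spacing in both azimuth and elevation assumed here yields 120 × 30 × 17 = 61,200 jobs, close to the reported 62,000 frames per vehicle.

```python
import itertools

# Assumed sampling grid: the article says the camera steps every
# 3 degrees over a hemisphere and uses 17 lighting placements, but
# does not specify the grid; this spacing is an assumption that
# roughly reproduces the reported ~62,000 frames per vehicle.
AZIMUTH_STEP_DEG = 3          # 0, 3, ..., 357 -> 120 azimuth angles
ELEVATION_STEP_DEG = 3        # 3, 6, ..., 90  -> 30 elevation angles
NUM_LIGHTING_PLACEMENTS = 17  # per the article

def enumerate_frames():
    """Yield one (azimuth, elevation, lighting) tuple per frame.

    Each frame is independent, so the full list can be split across
    workers -- e.g., 2,048 cluster nodes rendering simultaneously.
    """
    azimuths = range(0, 360, AZIMUTH_STEP_DEG)
    elevations = range(ELEVATION_STEP_DEG, 90 + 1, ELEVATION_STEP_DEG)
    lightings = range(NUM_LIGHTING_PLACEMENTS)
    yield from itertools.product(azimuths, elevations, lightings)

if __name__ == "__main__":
    frames = list(enumerate_frames())
    print(f"{len(frames)} frames per vehicle")  # 120 * 30 * 17 = 61,200
```

Because the frame tuples share no state, the list can simply be chunked across however many nodes are available, which is what lets 2,048 simultaneous renders at OSC collapse months of single-machine work into a few hours.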

“We used OSC’s Oakley cluster to quickly render large scenes with 100 million pixels that covered land segments of 28 km by 20 km,” said Todd Rovito, a research computer scientist at AFRL. “We have already generated twenty minutes of large-scale synthetic data and ten complete vehicle domes, for a total of 620,000 frames. We plan to release all the generated data to the scientific research community via AFRL’s Sensor Data Management System.”

Project Lead: Brian Rigling, Wright State University

Research Title: Fiorano: Detection test bed and modeling

Funding Source: Air Force Research Laboratory

Website: https://www.sdms.afrl.af.mil/index.php?collection=eo_synthetic_dat