ROS NeuralRecon

Check out the ros_neuralrecon repo here! About a month ago, we remarked on some exciting modifications to NeuralRecon, which generates dense scene reconstructions in real time by applying TSDF fusion to posed monocular video. Specifically, we noted that image feature extraction could be shifted to the edge using DepthAI cameras after converting the MnasMulti backbone to .blob. Since the model is trained on ScanNet, the researchers recommend capturing custom data with ARKit via ios_logger. We found this works well if you have Xcode on a Mac and an iPhone....

 · 4 min · Terry Rodriguez & Salma Mayorquin
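The excerpt above mentions TSDF fusion. As a point of reference, here is a minimal sketch of the classic per-voxel truncated signed distance update (the Curless–Levoy weighted running average) in pure Python; NeuralRecon itself replaces this hand-crafted update with a learned fusion module, and the function name and sample values below are illustrative:

```python
def tsdf_update(tsdf, weight, sdf, trunc=0.05, max_weight=50.0):
    """Fuse one new signed-distance observation into a single voxel.

    tsdf, weight: the voxel's current fused value and accumulated weight.
    sdf: new signed distance (meters) from the voxel to the observed surface.
    trunc: truncation band; observations are clamped and normalized by it.
    Returns the updated (tsdf, weight) pair.
    """
    # Clamp the observation to the truncation band and normalize to [-1, 1].
    d = max(-trunc, min(trunc, sdf)) / trunc
    # Weighted running average of all observations seen so far.
    new_tsdf = (tsdf * weight + d) / (weight + 1.0)
    # Cap the weight so old observations do not dominate forever.
    new_weight = min(weight + 1.0, max_weight)
    return new_tsdf, new_weight

# Fuse three noisy observations of a surface ~2 cm in front of the voxel.
state = (0.0, 0.0)
for obs in (0.021, 0.019, 0.020):
    state = tsdf_update(*state, obs)
```

Running the loop converges the voxel toward the average observed distance; extracting the zero crossing of the fused field (e.g. with marching cubes) yields the reconstructed surface.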

Go Nerf Yourself

While prototyping YogAI, our smart mirror fitness application, we dreamed of using generative models like GANs to render realistic avatars. For the TFWorld 2.0 Challenge, we came a bit closer to that vision by demonstrating a pipeline that quickly creates motion transfer videos. More recently, we have been learning about reconstruction techniques and have been excited about the work around Neural Radiance Fields (NeRF). With this method, one learns an implicit representation of a scene from posed monocular videos....

 · 2 min · Terry Rodriguez & Salma Mayorquin
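The implicit representation mentioned above is turned into images by compositing density samples along each camera ray. A hedged pure-Python sketch of the standard volume-rendering weights that NeRF optimizes through (the densities and "colors" below are made-up sample values, not output of a trained model):

```python
import math

def render_weights(sigmas, deltas):
    """Compute per-sample compositing weights along one camera ray.

    sigmas: volume densities at each sample point along the ray.
    deltas: distances between consecutive samples.
    Each weight is w_i = T_i * (1 - exp(-sigma_i * delta_i)), where T_i is
    the transmittance accumulated before reaching sample i.
    """
    weights, transmittance = [], 1.0
    for sigma, delta in zip(sigmas, deltas):
        alpha = 1.0 - math.exp(-sigma * delta)  # opacity of this segment
        weights.append(transmittance * alpha)
        transmittance *= 1.0 - alpha  # light surviving past this sample
    return weights

# A ray with one dense sample in the middle: most weight lands there.
w = render_weights(sigmas=[0.1, 5.0, 0.1], deltas=[0.5, 0.5, 0.5])
# The rendered pixel is the weight-blended sum of per-sample colors.
color = sum(wi * ci for wi, ci in zip(w, [0.2, 0.9, 0.4]))
```

Because every step is differentiable, the photometric loss between rendered and captured pixels can be backpropagated into the scene representation.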

Real-Time Reconstructions

Modern archeologists have been able to survey larger structures more precisely using remote sensing and photogrammetry. More recently, researchers have demonstrated applications of multi-view stereo with networked embedded cameras to track geological disturbances. In scenarios where visibility comes at high cost or safety risk, the ability to quickly render high-fidelity reconstructions for offline analysis & review can be a powerful tool. Advances in techniques like multi-view stereo and structure from motion have reduced costs by alleviating dependence on more expensive sensors like lidar....

 · 4 min · Terry Rodriguez & Salma Mayorquin