I looked into laser scanning cameras a number of years back when point-cloud-generating technology first came out. Basically, they fired a bunch of laser pulses and recorded the time each one took to hit the surface and return, and that round-trip time defined the topography of an object. It's gotten better over the years, to the point where it can now use a phone camera, however...
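For anyone curious about the math, here is a minimal sketch of the time-of-flight idea in Python; the numbers are made up for illustration:

```python
# Time-of-flight: distance is half the round-trip time times the speed of light.
C = 299_792_458.0  # speed of light, m/s

def tof_distance(round_trip_seconds: float) -> float:
    """Return distance in meters from a laser pulse's round-trip time."""
    return C * round_trip_seconds / 2.0

# A pulse that returns after ~33 nanoseconds hit something ~5 m away.
print(tof_distance(33e-9))  # ~4.95
```

Do that for millions of pulses across a scene and the returned distances become the points of the cloud.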
The point cloud it generates can be very hard to work with, which is where the process left people behind and why it never took off. I have seen a few things in SL and other VR spaces that used .obj files generated by this process, and they were generally very high-polygon with a somewhat random layout, so just removing more polys or points didn't make them better.
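To show what I mean by "removing more polys doesn't help," here is a rough sketch using the Open3D library (the `scan.obj` file name is just a placeholder). Decimation cuts the triangle count, but the layout of the remaining triangles stays just as random:

```python
import open3d as o3d  # pip install open3d

# Load a scanned mesh and decimate it. This reduces the triangle count
# but does nothing to organize the unstructured layout of the triangles.
mesh = o3d.io.read_triangle_mesh("scan.obj")  # placeholder file name
print(f"before: {len(mesh.triangles)} triangles")

decimated = mesh.simplify_quadric_decimation(target_number_of_triangles=10_000)
print(f"after:  {len(decimated.triangles)} triangles")
o3d.io.write_triangle_mesh("scan_decimated.obj", decimated)
```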
One of the advantages of the refinements made over time is that the cameras can capture lighting and color information along with the geometry, so the process can also generate the textures for you, and very accurately.
I would like to see some results from the phone app, and I may mess around with it to see what my limited LG Stylo 6 can do, particularly since it automatically generates an .obj or .dae file. I doubt anything straight out of it will be game-ready, but if it does reasonably well, someone running retopology software as a post-process could possibly make the output workable.
Since this technology first came out, regardless of further refinement and development, the results of every attempt I have seen look very similar, as if a plastic sheet had been draped over everything. That look comes from gaps in the data cloud being bridged over as black, featureless patches. The holes have to be closed and the missing data filled in, and that was always the hard part.
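One common way those gaps get closed, sketched below with Open3D (the `cloud.ply` file name is a placeholder): Poisson surface reconstruction fits a smooth watertight surface through the points rather than meshing them directly, which is exactly the kind of interpolation that produces that draped-sheet look over missing data:

```python
import open3d as o3d  # pip install open3d

# Fit a watertight surface through a scanned point cloud.
pcd = o3d.io.read_point_cloud("cloud.ply")  # placeholder file name
pcd.estimate_normals()  # Poisson reconstruction needs oriented normals

mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9
)
o3d.io.write_triangle_mesh("watertight.obj", mesh)
```

The `depth` parameter trades detail for smoothness; no setting can invent surface detail that the scan never captured, which is why the holes were the hard part.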