Civil Aviation High Technologies

Visual coherence in augmented reality training systems considering aerospace specific features

https://doi.org/10.26467/2079-0619-2023-26-5-30-41

Abstract

In May 2022, Saudi Arabian Military Industries, a Saudi state-owned defense company, acquired an augmented reality training platform for pilots. In September, the Boeing Corporation began developing an augmented reality pilot simulator, and in November a similar project was launched by BAE Systems, a leading British developer of aeronautical engineering. These facts allow us to speak confidently about the beginning of a new era of aviation simulators – simulators based on augmented reality technology. One of the promising advantages of this technology is the ability to safely simulate dangerous situations in the real world. A necessary condition for exploiting this advantage is the visual coherence of augmented reality scenes: virtual objects must be indistinguishable from real ones. All the global IT leaders regard augmented reality as the next wave of radical change in digital electronics, so visual coherence is becoming a key issue for the future of IT, and in aerospace applications it has already acquired practical significance. The Russian Federation lags far behind in studying the problems of visual coherence in general, and for augmented reality flight simulators in particular: at the time of publication the authors were able to find only two papers on the subject in the Russian research space, whereas abroad their number already approaches a thousand. The purpose of this review article is to lay the groundwork for solving this problem. Visual coherence depends on many factors: lighting, color tone, shadows cast by virtual objects onto real ones, mutual reflections, textures of virtual surfaces, optical aberrations, convergence and accommodation, etc. The article reviews publications devoted to methods for estimating the illumination and color tone of a real scene and transferring them to virtual objects, both with various light probes and from individual images, as well as methods for rendering virtual objects in augmented reality scenes using neural networks.
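To make the color-tone aspect concrete, below is a minimal Python sketch of the general idea behind the color transfer methods surveyed in the article: matching the global color statistics of a virtual render to those of the real camera frame. The function name and the synthetic test images are illustrative assumptions of this sketch, not code from any cited paper; the published methods typically operate in perceptual color spaces and use more robust statistics.

```python
# Minimal sketch of global color-statistics transfer for AR compositing.
# Assumption: both images are float RGB arrays in [0, 1] with shape (H, W, 3).
import numpy as np

def transfer_color_statistics(virtual_rgb: np.ndarray,
                              real_rgb: np.ndarray) -> np.ndarray:
    """Shift each channel of the virtual render so its mean and standard
    deviation match those of the real camera frame."""
    out = np.empty_like(virtual_rgb)
    for c in range(3):
        v = virtual_rgb[..., c]
        r = real_rgb[..., c]
        v_mean, v_std = v.mean(), v.std() + 1e-6  # avoid division by zero
        r_mean, r_std = r.mean(), r.std()
        # Normalize the virtual channel, then rescale to the real frame's statistics.
        out[..., c] = (v - v_mean) / v_std * r_std + r_mean
    return np.clip(out, 0.0, 1.0)

# Example: a bluish virtual render adapted to a warm, dim real scene.
rng = np.random.default_rng(0)
virtual = np.clip(rng.normal([0.4, 0.5, 0.8], 0.10, (120, 160, 3)), 0, 1)
real = np.clip(rng.normal([0.6, 0.45, 0.3], 0.15, (120, 160, 3)), 0, 1)
adapted = transfer_color_statistics(virtual, real)
print(adapted.mean(axis=(0, 1)))  # channel means now close to those of `real`
```

Per-channel mean/standard-deviation matching is the simplest possible formulation; the surveyed work refines it with color similarity measures, gradient preservation, and temporally stable color balancing.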

About the Authors

A. L. Gorbunov
Moscow State Technical University of Civil Aviation
Russian Federation

Andrey L. Gorbunov, Candidate of Technical Sciences, Associate Professor of the Air Traffic Management Chair

Moscow



Yu. Li
Moscow State Technical University of Civil Aviation
Russian Federation

Yunhan Li, Postgraduate Student

Moscow



References

1. Gorbunov, A.L. (2023). Visual coherence for augmented reality. Advanced Engineering Research, vol. 23, no. 2, pp. 180–190. DOI: 10.23947/2687-1653-2023-23-2-180-190 (in Russian)

2. Gorbunov, A.L. (2017). Visual homogeneity of augmented reality scenes. In: Zapis i vosproizvedeniye obyemnykh izobrazheniy v kinematografe i drugikh oblastyakh: sbornik dokladov IX Mezhdunarodnoy nauchno-prakticheskoy konferentsii. Moscow: VGIK im. S.A. Gerasimova, pp. 235–239. (in Russian)

3. Hughes, C., Fidopiastis, C., Stanney, K., Bailey, P., Ruiz, E. (2020). The psychometrics of cybersickness in augmented reality. Frontiers in Virtual Reality, vol. 1, ID: 602954. DOI: 10.3389/frvir.2020.602954 (accessed: 05.04.2023).

4. Somanath, G., Kurz, D. (2021). HDR environment map estimation for real-time augmented reality. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, pp. 11293–11301. DOI: 10.1109/CVPR46437.2021.01114 (accessed: 05.04.2023).

5. Zollmann, S., Langlotz, T., Grasset, R., Lo, W.H., Mori, S., Regenbrecht, H. (2021). Visualization techniques in augmented reality: A taxonomy, methods and patterns. IEEE Transactions on Visualization and Computer Graphics, vol. 27, no. 9, pp. 3808–3825. DOI: 10.1109/TVCG.2020.2986247

6. Kronander, J., Banterle, F., Gardner, A., Miandji, E., Unger, J. (2015). Photorealistic rendering of mixed reality scenes. Computer Graphics Forum, vol. 34, issue 2, pp. 643–665. DOI: 10.1111/cgf.12591

7. Debevec, P., Graham, P., Busch, J., Bolas, M. (2012). A single-shot light probe. In: SIGGRAPH '12: Special Interest Group on Computer Graphics and Interactive Techniques Conference, Los Angeles, California, Article No.: 10, pp. 1–19. DOI: 10.1145/2343045.2343058 (accessed: 05.04.2023).

8. Unger, J., Wenger, A., Hawkins, T., Gardner, A., Debevec, P. (2003). Capturing and rendering with incident light fields. In: 14th Eurographics Symposium on Rendering, pp. 141–149. DOI: 10.2312/EGWR/EGWR03/141-149

9. Alhakamy, A., Tuceryan, M. (2019). CubeMap360: Interactive global illumination for augmented reality in dynamic environment. In: Proceedings of IEEE SoutheastCon, Huntsville, AL, USA, pp. 1–8. DOI: 10.1109/SoutheastCon42311.2019.9020588

10. Knorr, S., Kurz, D. (2014). Real-time illumination estimation from faces for coherent rendering. In: 2014 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Munich, Germany, pp. 349–350. DOI: 10.1109/ISMAR.2014.6948483

11. Karsch, K., Sunkavalli, K., Hadap, S., et al. (2014). Automatic scene inference for 3D object compositing. ACM Transactions on Graphics, vol. 33, no. 3, pp. 1–15. DOI: 10.1145/2602146

12. Tsunezaki, S., Nomura, R., Komuro, T., Yamamoto, S., Tsumura, N. (2018). Reproducing material appearance of real objects using mobile augmented reality. In: 2018 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), Munich, Germany, pp. 196–197. DOI: 10.1109/ISMAR-Adjunct.2018.00065

13. Reinhard, E., Akyuz, A.O., Colbert, M., Hughes, C., O'Connor, M. (2004). Real-time color blending of rendered and captured video. In: Proceedings Interservice/Industry Training, Simulation and Education Conference (I/ITSEC), Orlando, USA, p. 15021. Available at: http://www.ceng.metu.edu.tr/~akyuz/files/blend.pdf (accessed: 05.04.2023).

14. Chen, W.-S., Huang, M.-L., Wang, C.-M. (2016). Optimizing color transfer using color similarity measurement. In: 2016 IEEE/ACIS 15th International Conference on Computer and Information Science (ICIS), pp. 1–6. DOI: 10.1109/ICIS.2016.7550799

15. Chang, Y., Saito, S., Nakajima, M. (2007). Example-based color transformation of image and video using basic color categories. IEEE Transactions on Image Processing, vol. 16, no. 2, pp. 329–336. DOI: 10.1109/tip.2006.888347

16. Xiao, X., Ma, L. (2009). Gradient-preserving color transfer. Computer Graphics Forum, vol. 28, issue 7, pp. 1879–1886. DOI: 10.1111/j.1467-8659.2009.01566.x

17. Oskam, T., Hornung, A., Sumner, R.W., Gross, M. (2012). Fast and stable color balancing for images and augmented reality. In: 2012 Second International Conference on 3D Imaging, Modeling, Processing, Visualization & Transmission (3DIMPVT), Zurich, Switzerland, October 13–15, pp. 49–56. DOI: 10.1109/3DIMPVT.2012.36

18. Knecht, M., Traxler, C., Purgathofer, W., Wimmer, M. (2011). Adaptive camera-based color mapping for mixed-reality applications. In: 2011 10th IEEE International Symposium on Mixed and Augmented Reality (ISMAR 2011), Basel, Switzerland, pp. 165–168. DOI: 10.1109/ISMAR.2011.6092382

19. Einabadi, F., Guillemaut, J., Hilton, A. (2021). Deep neural models for illumination estimation and relighting: A survey. Computer Graphics Forum, vol. 40, issue 6, pp. 315–331. DOI: 10.1111/cgf.14283

20. Gardner, M., Sunkavalli, K., Yumer, E., et al. (2017). Learning to predict indoor illumination from a single image. ACM Transactions on Graphics, vol. 36, issue 6, Article No.: 176, pp. 1–14. DOI: 10.1145/3130800.3130891

21. Song, S., Funkhouser, T. (2019). Neural illumination: Lighting prediction for indoor environments. In: Proceedings 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 6918–6926. DOI: 10.48550/arXiv.1906.07370 (accessed: 05.04.2023).

22. Cheng, D., Shi, J., Chen, Y., Deng, X., Zhang, X. (2018). Learning scene illumination by pairwise photos from rear and front mobile cameras. Computer Graphics Forum, vol. 37, issue 7, pp. 213–221. DOI: 10.1111/cgf.13561

23. Hold-Geoffroy, Y., Sunkavalli, K., Hadap, S., Gambaretto, E., Lalonde, J. (2017). Deep outdoor illumination estimation. In: Proceedings 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, pp. 2373–2382. DOI: 10.1109/CVPR.2017.255 (accessed: 05.04.2023).

24. Zhao, Y., Guo, T. (2020). PointAR: Efficient lighting estimation for mobile augmented reality. In: 16th European Conference on Computer Vision (ECCV'20), pp. 678–693. DOI: 10.48550/arXiv.2004.00006

25. Garon, M., Sunkavalli, K., Hadap, S., Carr, N., Lalonde, J. (2019). Fast spatially-varying indoor lighting estimation. In: Proceedings IEEE Conference on Computer Vision and Pattern Recognition, pp. 6908–6917. DOI: 10.48550/arXiv.1906.03799 (accessed: 05.04.2023).

26. LeGendre, C., Ma, W., Fyffe, G., et al. (2019). DeepLight: Learning illumination for unconstrained mobile mixed reality. In: Proceedings IEEE Conference on Computer Vision and Pattern Recognition, pp. 5918–5928. DOI: 10.48550/arXiv.1904.01175 (accessed: 05.04.2023).

27. Srinivasan, P., Mildenhall, B., Tancik, M., Barron, J., Tucker, R., Snavely, N. (2020). Lighthouse: Predicting lighting volumes for spatially-coherent illumination. In: Proceedings IEEE Conference on Computer Vision and Pattern Recognition, pp. 8080–8089. DOI: 10.48550/arXiv.2003.08367 (accessed: 05.04.2023).

28. Tewari, A., Fried, O., Thies, J., et al. (2020). State of the art on neural rendering. Computer Graphics Forum, vol. 39, issue 2, pp. 701–727. DOI: 10.1111/cgf.14022

29. Goodfellow, I., Pouget-Abadie, J., Mirza, M., et al. (2014). Generative adversarial nets. In: Proceedings of the 27th International Conference on Neural Information Processing Systems, vol. 2, pp. 2672–2680. DOI: 10.5555/2969033.2969125

30. Karras, T., Laine, S., Aila, T. (2019). A style-based generator architecture for generative adversarial networks. In: 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, pp. 4396–4405. DOI: 10.1109/CVPR.2019.00453 (accessed: 05.04.2023).

31. Bi, S., Sunkavalli, K., Perazzi, F., Shechtman, E., Kim, V., Ramamoorthi, R. (2019). Deep CG2Real: Synthetic-to-real translation via image disentanglement. In: 2019 IEEE/CVF International Conference on Computer Vision (ICCV), pp. 2730–2739. DOI: 10.1109/ICCV.2019.00282

32. Xu, Z., Bi, S., Sunkavalli, K., Hadap, S., Su, H., Ramamoorthi, R. (2019). Deep view synthesis from sparse photometric images. ACM Transactions on Graphics, vol. 38, issue 4, Article No.: 76, pp. 1–13. DOI: 10.1145/3306346.3323007 (accessed: 05.04.2023).

33. Park, T., Liu, M., Wang, T., Zhu, J. (2019). Semantic image synthesis with spatially-adaptive normalization. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 19 p. DOI: 10.48550/arXiv.1903.07291 (accessed: 05.04.2023).

34. Li, Z., Shafiei, M., Ramamoorthi, R., Sunkavalli, K., Chandraker, M. (2020). Inverse rendering for complex indoor scenes: Shape, spatially-varying lighting and SVBRDF from a single image. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2475–2484. DOI: 10.48550/arXiv.1905.02722 (accessed: 05.04.2023).

35. Zhan, F., Yu, Y., Wu, R., et al. (2022). Bi-level feature alignment for semantic image translation and manipulation. In: Proceedings IEEE Conference on Computer Vision and Pattern Recognition, 18 p. DOI: 10.48550/arXiv.2107.03021 (accessed: 05.04.2023).

36. Ghasemi, Y., Jeong, H., Choi, S., Lee, J., Park, K. (2022). Deep learning-based object detection in augmented reality: A systematic review. Computers in Industry, vol. 139, ID: 103661. DOI: 10.1016/j.compind.2022.103661 (accessed: 05.04.2023).


For citations:


Gorbunov A.L., Li Yu. Visual coherence in augmented reality training systems considering aerospace specific features. Civil Aviation High Technologies. 2023;26(5):30-41. (In Russ.) https://doi.org/10.26467/2079-0619-2023-26-5-30-41

This work is licensed under a Creative Commons Attribution 4.0 License.


ISSN 2079-0619 (Print)
ISSN 2542-0119 (Online)