Literature review of contextual information construction and applications in mixed reality
2024, Vol. 29, No. 10, Pages: 2937-2954
Print publication date: 2024-10-16
DOI: 10.11834/jig.230750
Yang Haozhong, Shu Wentong, Wang Miao. 2024. Literature review of contextual information construction and applications in mixed reality. Journal of Image and Graphics, 29(10): 2937-2954
With the development of information technology, mixed reality (MR) has been applied in many fields, such as healthcare, education, and assisted guidance. MR scenes contain rich semantic information; MR techniques based on scene context can improve users’ perception of the scene, optimize their interactions, and increase the accuracy of interaction models, and have therefore attracted wide attention. However, no survey to date has specifically examined contextual information in this field, which thus lacks systematic organization and classification. This paper studies MR techniques and systems that use contextual information. Through a literature survey of the MR field, we pose three research questions and analyze 33 empirical studies published internationally over the past 20 years, summarizing the latest developments in context-based MR techniques. We conduct a taxonomic study along three dimensions and propose classification criteria for each: the types of contextual information, the construction of contextual knowledge bases, and the application domains. Contextual information is divided into six types: scene semantics, object semantics, spatial relationships, group relationships, dependence relationships, and motion information. Knowledge-base construction is classified by the degree of user intervention and by the underlying technology, and application domains are classified by scenario type and by whether the approach is generative. By classifying the surveyed work along these dimensions, we answer the research questions and summarize current shortcomings and possible future research directions. This survey can assist researchers in different fields in designing, selecting, and evaluating contextual information, thereby promoting the development of future MR technologies and systems.
With the development of information technology, mixed reality (MR) technology has been applied in various fields, such as healthcare, education, and assisted guidance. MR scenes contain rich semantic information, and MR technology based on scene context can improve users’ perception of the scene, optimize user interaction, and enhance the accuracy of interaction models; such techniques have therefore quickly gained widespread attention. However, literature reviews specifically investigating context information in this field are limited, and systematic organization and classification are lacking. This paper focuses on MR technologies and systems that utilize context information. The study was conducted following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. First, search keywords were determined on the basis of three factors: research domain, study subjects, and research scenarios. Searches were then performed in two influential databases in the field of MR: the ACM Digital Library and IEEE Xplore. A preliminary screening followed, which considered the types of journals and conferences to eliminate irrelevant and unpublished literature. The titles and abstracts of the remaining articles were then reviewed sequentially to eliminate duplicates and irrelevant results. Finally, 210 articles were individually screened, from which 29 papers were selected for the review; four more articles were included on the basis of domain expertise, resulting in a total of 33 articles. Through this comprehensive literature review, three research questions were formulated, and a dataset of research articles was established. The three research questions addressed in this paper are as follows: 1) What are the different types of scene context? 2) How is scene context organized in various MR technologies and systems? 3) What are the application areas of empirical research?
On the basis of the evolution of scene context and the refinement of MR technologies and systems, we analyze empirical research papers spanning nearly 20 years. This analysis involves summarizing previous research and providing an overview of the latest developments in systems that leverage scene context. We also propose potential classification criteria, such as types of scene context, construction methods of knowledge bases for contextual information, fundamental technologies, and application domains. We categorize scene context into six classes: scene semantics, object semantics, spatial relationships, group relationships, dependence relationships, and motion information. Scene semantics is the semantic information encompassed by the various elements in the scene environment, including objects, characters, and texture information. In the categorization of object semantics, we consider information about the individual object itself, such as user information, type, attributes, and special content. Spatial relationships refer to numerical information, such as the relative position, angle, or arrangement between objects in the scene; we analyze them at three levels: basic spatial relationships, microscene spatial information, and real-scene spatial information. We consider a certain number of closely neighboring objects of the same category as a group; group relationships focus on information from an overall perspective, such as intergroup relations and the number of groups. Dependence relationships concern the functional and physical dependencies and affiliations that may exist between different objects in the scene. Motion information is a new type of scene context that describes the dynamic information of scene objects, comprising basic motion information and special motion information.
Through an analysis of the utilization of various types of scene context, we establish the relationship between research objectives and contextual information, providing guidance on the selection of contextual information. The construction of knowledge bases is examined from user-intervention perspectives and types of fundamental technologies. Knowledge bases established with user intervention typically rely on researchers’ abstract analysis of scene objects rather than pre-existing databases. Conversely, knowledge bases built without user intervention rely on existing information, such as low-level raw data in databases or predefined scenarios. The underlying technologies in this context are categorized into virtual reality (VR) and augmented reality (AR). Conducting classification research from the dual perspectives of user intervention and fundamental technology facilitates a deeper understanding of how contextual information is organized in various MR systems. Application areas are investigated on the basis of the types of scenarios and whether they involve generative processes or not. The types of application scenarios are then categorized into six types: auxiliary guidance, AR annotation, scene reconstruction, medical treatment, object manipulation, and general purpose. Generative models can automatically generate target information, such as AR-annotated shadows based on the scene, whereas nongenerative models mainly focus on specific operations. Through analysis from these two perspectives, the advantages and disadvantages of MR systems and technologies in different application scenarios can be explored. Drawing upon the exploration and research in these three dimensions, we investigate the challenges associated with selecting, acquiring, and applying contextual information in MR scenarios. By classifying the research objects from different dimensions, we address the research questions and identify current shortcomings and future research directions. 
The aim of this review is to support researchers across diverse fields in designing, selecting, and evaluating scene context, ultimately fostering the advancement of future MR application technologies and systems.
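As an illustration only (not part of the surveyed paper), the six-category taxonomy above can be read as a schema for a contextual knowledge base: each entry tags a scene object with one category of context. The sketch below encodes that idea in Python; all class, field, and object names (`ContextType`, `ContextEntry`, `chair_01`, `desk_01`) are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum, auto

class ContextType(Enum):
    # The six scene-context categories identified by the survey
    SCENE_SEMANTICS = auto()
    OBJECT_SEMANTICS = auto()
    SPATIAL_RELATIONSHIP = auto()
    GROUP_RELATIONSHIP = auto()
    DEPENDENCE_RELATIONSHIP = auto()
    MOTION_INFORMATION = auto()

@dataclass
class ContextEntry:
    """One piece of scene context attached to an MR scene object."""
    context_type: ContextType
    subject: str   # object or group the entry describes
    value: object  # e.g. a semantic label, a relative pose, a group id

# Hypothetical knowledge base: a chair's type label (object semantics)
# and its position relative to a desk (spatial relationship)
entries = [
    ContextEntry(ContextType.OBJECT_SEMANTICS, "chair_01", "office chair"),
    ContextEntry(ContextType.SPATIAL_RELATIONSHIP, "chair_01",
                 {"relative_to": "desk_01", "offset_m": (0.4, 0.0, 0.0)}),
]

def by_type(entries, ctx_type):
    """Filter a knowledge base by context category."""
    return [e for e in entries if e.context_type is ctx_type]
```

A downstream MR system could then query, say, `by_type(entries, ContextType.SPATIAL_RELATIONSHIP)` when deciding where to place an AR annotation, without caring how the entries were constructed (with or without user intervention).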
virtual reality (VR); augmented reality (AR); perception and interaction; context information; scene semantics
Acquisti A, Gross R and Stutzman F. 2014. Face recognition and privacy in the age of augmented reality. Journal of Privacy and Confidentiality, 6(2): 1-20 [DOI: 10.29012/jpc.v6i2.638]
Anthes C, García-Hernández R J, Wiedemann M and Kranzlmüller D. 2016. State of the art of virtual reality technology//Proceedings of 2016 IEEE Aerospace Conference. Big Sky, USA: IEEE: 1-19 [DOI: 10.1109/AERO.2016.7500674]
Barreira J, Bessa M, Barbosa L and Magalhães L. 2018. A context-aware method for authentically simulating outdoors shadows for mobile augmented reality. IEEE Transactions on Visualization and Computer Graphics, 24(3): 1223-1231 [DOI: 10.1109/TVCG.2017.2676777]
Bekele M K, Pierdicca R, Frontoni E, Malinverni E S and Gain J. 2018. A survey of augmented, virtual, and mixed reality for cultural heritage. Journal on Computing and Cultural Heritage, 11(2): #7 [DOI: 10.1145/3145534]
Bichlmeier C, Wimmer F, Heining S M and Navab N. 2007. Contextual anatomic mimesis hybrid in-situ visualization method for improving multi-sensory depth perception in medical augmented reality//2007 6th IEEE and ACM International Symposium on Mixed and Augmented Reality. Nara, Japan: IEEE: 129-138 [DOI: 10.1109/ISMAR.2007.4538837]
Bukowski R W and Séquin C H. 1995. Object associations: a simple and practical approach to virtual 3D manipulation//Proceedings of 1995 Symposium on Interactive 3D Graphics. Monterey, USA: ACM: 131-ff [DOI: 10.1145/199404.199427]
Campbell A G, Santiago K, Hoo D and Mangina E. 2016. Future mixed reality educational spaces//Proceedings of 2016 Future Technologies Conference (FTC). San Francisco, USA: IEEE: 1088-1093 [DOI: 10.1109/FTC.2016.7821738]
Cashion J, Wingrave C and LaViola J J. 2013. Optimal 3D selection technique assignment using real-time contextual analysis//2013 IEEE Symposium on 3D User Interfaces (3DUI). Orlando, USA: IEEE: 107-110 [DOI: 10.1109/3DUI.2013.6550205]
Chen K, Lai Y K, Wu Y X, Martin R and Hu S M. 2014. Automatic semantic modeling of indoor scenes from low-quality RGB-D data using contextual information. ACM Transactions on Graphics, 33(6): #208 [DOI: 10.1145/2661229.2661239]
Chen L, Tang W, John N, Wan T R and Zhang J J. 2018. Context-aware mixed reality: a framework for ubiquitous interaction [EB/OL]. [2023-10-17]. https://arxiv.org/pdf/1803.05541.pdf
Costanza E, Kunz A and Fjeld M. 2009. Mixed reality: a survey//Lalanne D and Kohlas J, eds. Human Machine Interaction. Berlin, Heidelberg: Springer: 47-68 [DOI: 10.1007/978-3-642-00437-7_3]
De Guzman J A, Thilakarathna K and Seneviratne A. 2019. Security and privacy approaches in mixed reality: a literature survey. ACM Computing Surveys, 52(6): #110 [DOI: 10.1145/3359626]
Dehmeshki H and Stuerzlinger W. 2008. Intelligent mouse-based object group selection//9th International Symposium on Smart Graphics. Rennes, France: Springer: 33-44 [DOI: 10.1007/978-3-540-85412-8_4]
Ding W Y and Shan M X. 2023. A systematic literature review of affective experience in virtual reality learning environment. Software Guide, 22(9): 32-44 [DOI: 10.11907/rjdk.222144]
ElSayed N A M, Smith R T and Thomas B H. 2016. Horus eye: see the invisible bird and snake vision for augmented reality information visualization//2016 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct). Merida, Mexico: IEEE: 203-208 [DOI: 10.1109/ISMAR-Adjunct.2016.0077]
Ennis C, Peters C and O’Sullivan C. 2008. Perceptual evaluation of position and orientation context rules for pedestrian formations//Proceedings of the 5th Symposium on Applied Perception in Graphics and Visualization. Los Angeles, USA: ACM: 75-82 [DOI: 10.1145/1394281.1394295]
Fidalgo C G, Yan Y K, Cho H, Sousa M, Lindlbauer D and Jorge J. 2023. A survey on remote assistance and training in mixed reality environments. IEEE Transactions on Visualization and Computer Graphics, 29(5): 2291-2303 [DOI: 10.1109/TVCG.2023.3247081]
Fisher M and Hanrahan P. 2010. Context-based search for 3D models//ACM SIGGRAPH Asia 2010 Papers. Seoul, Korea (South): ACM: #182 [DOI: 10.1145/1866158.1866204]
Fisher M, Ritchie D, Savva M, Funkhouser T and Hanrahan P. 2012. Example-based synthesis of 3D object arrangements. ACM Transactions on Graphics, 31(6): #135 [DOI: 10.1145/2366145.2366154]
Fisher M, Savva M and Hanrahan P. 2011. Characterizing structural relationships in scenes using graph kernels//ACM SIGGRAPH 2011 Papers. Vancouver, Canada: ACM: #34 [DOI: 10.1145/1964921.1964929]
Fisher M, Savva M, Li Y Y, Hanrahan P and Nießner M. 2015. Activity-centric scene synthesis for functional 3D scene modeling. ACM Transactions on Graphics, 34(6): #179 [DOI: 10.1145/2816795.2818057]
Fu Q, Chen X W, Wang X T, Wen S J, Zhou B and Fu H B. 2017. Adaptive synthesis of indoor scenes via activity-associated object relation graphs. ACM Transactions on Graphics, 36(6): #201 [DOI: 10.1145/3130800.3130805]
Fujita K, Itoh Y, Takashima K, Nakajima K, Hayashi Y and Kishino F. 2013. Ambient party room: a room-shaped system enhancing communication for parties or gatherings//2013 IEEE Virtual Reality (VR). Lake Buena Vista, USA: IEEE: 1-4 [DOI: 10.1109/VR.2013.6549436]
Gösele M and Stuerzlinger W. 1999. Semantic constraints for scene manipulation//Proceedings of 1999 Spring Conference on Computer Graphics. Comenius University: 140-146
Gottschalk S A. 2000. Collision Queries Using Oriented Bounding Boxes. North Carolina, USA: The University of North Carolina at Chapel Hill
Gras G and Yang G Z. 2019. Context-aware modeling for augmented reality display behaviour. IEEE Robotics and Automation Letters, 4(2): 562-569 [DOI: 10.1109/LRA.2019.2890852]
Haller M, Drab S and Hartmann W. 2003. A real-time shadow approach for an augmented reality application using shadow volumes//Proceedings of the ACM Symposium on Virtual Reality Software and Technology. Osaka, Japan: ACM: 56-65 [DOI: 10.1145/1008653.1008665]
He K M, Gkioxari G, Dollár P and Girshick R. 2017. Mask R-CNN//Proceedings of 2017 IEEE International Conference on Computer Vision. Venice, Italy: IEEE: 2980-2988 [DOI: 10.1109/ICCV.2017.322]
He Y, Liu Y T, Jin Y H, Zhang S H, Lai Y K and Hu S M. 2022. Context-consistent generation of indoor virtual environments based on geometry constraints. IEEE Transactions on Visualization and Computer Graphics, 28(12): 3986-3999 [DOI: 10.1109/TVCG.2021.3111729]
Irawati S, Green S, Billinghurst M, Duenser A and Ko H. 2006. “Move the couch where?”: developing an augmented reality multimodal interface//2006 IEEE/ACM International Symposium on Mixed and Augmented Reality. Santa Barbara, USA: IEEE: 183-186 [DOI: 10.1109/ISMAR.2006.297812]
Jacobs K and Loscos C. 2006. Classification of illumination methods for mixed reality. Computer Graphics Forum, 25(1): 29-51 [DOI: 10.1111/j.1467-8659.2006.00816.x]
Joseph S L, Zhang X C, Dryanovski I, Xiao J Z, Yi C C and Tian Y L. 2013. Semantic indoor navigation with a blind-user oriented augmented reality//Proceedings of 2013 IEEE International Conference on Systems, Man, and Cybernetics. Manchester, UK: IEEE: 3585-3591 [DOI: 10.1109/SMC.2013.611]
Kalkofen D, Mendez E and Schmalstieg D. 2009. Comprehensible visualization for augmented reality. IEEE Transactions on Visualization and Computer Graphics, 15(2): 193-204 [DOI: 10.1109/TVCG.2008.96]
Kang Z Z, Yang J T, Yang Z and Cheng S. 2020. A review of techniques for 3D reconstruction of indoor environments. ISPRS International Journal of Geo-Information, 9(5): #330 [DOI: 10.3390/ijgi9050330]
Köppel T, Gröller M E and Wu H Y. 2021. Context-responsive labeling in augmented reality//2021 IEEE 14th Pacific Visualization Symposium (PacificVis). Tianjin, China: IEEE: 91-100 [DOI: 10.1109/PacificVis52677.2021.00020]
Koulieris G A, Drettakis G, Cunningham D and Mania K. 2014. An automated high-level saliency predictor for smart game balancing. ACM Transactions on Applied Perception, 11(4): #17 [DOI: 10.1145/2637479]
Lang Y N, Liang W and Yu L F. 2019. Virtual agent positioning driven by scene semantics in mixed reality//Proceedings of 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). Osaka, Japan: IEEE: 767-775 [DOI: 10.1109/VR.2019.8798018]
Li C Y, Li W W, Huang H K and Yu L F. 2022. Interactive augmented reality storytelling guided by scene semantics. ACM Transactions on Graphics, 41(4): #91 [DOI: 10.1145/3528223.3530061]
Li R Y, Han B X, Li H W, Ma L F, Zhang X R, Zhao Z and Liao H. 2023. A comparative evaluation of optical see-through augmented reality in surgical guidance. IEEE Transactions on Visualization and Computer Graphics, 30(7): 4362-4374 [DOI: 10.1109/TVCG.2023.3260001]
Li S Z. 1994. Markov random field models in computer vision//Proceedings of the 3rd European Conference on Computer Vision. Stockholm, Sweden: Springer: 361-370 [DOI: 10.1007/BFb0028368]
Loscos C, Drettakis G and Robert L. 2000. Interactive virtual relighting of real scenes. IEEE Transactions on Visualization and Computer Graphics, 6(4): 289-305 [DOI: 10.1109/2945.895874]
Ma R, Patil A G, Fisher M, Li M Y, Pirk S, Hua B S, Yeung S K, Tong X, Guibas L and Zhang H. 2018. Language-driven synthesis of 3D scenes from scene databases. ACM Transactions on Graphics, 37(6): #212 [DOI: 10.1145/3272127.3275035]
Merrell P, Schkufza E, Li Z Y, Agrawala M and Koltun V. 2011. Interactive furniture layout using interior design guidelines. ACM Transactions on Graphics, 30(4): #87 [DOI: 10.1145/2010324.1964982]
Milgram P and Kishino F. 1994. A taxonomy of mixed reality visual displays. IEICE Transactions on Information and Systems, E77-D(12): 1321-1329
Milgram P, Takemura H, Utsumi A and Kishino F. 1995. Augmented reality: a class of displays on the reality-virtuality continuum//Proceedings Volume 2351, Telemanipulator and Telepresence Technologies. Boston, USA: SPIE: 282-292 [DOI: 10.1117/12.197321]
Miller G A. 1995. WordNet: a lexical database for English. Communications of the ACM, 38(11): 39-41 [DOI: 10.1145/219717.219748]
Moro C, Štromberga Z, Raikos A and Stirling A. 2017. The effectiveness of virtual and augmented reality in health sciences and medical anatomy. Anatomical Sciences Education, 10(6): 549-559 [DOI: 10.1002/ase.1696]
Murphy K M, Cash J and Kellinger J J. 2018. Learning with avatars: exploring mixed reality simulations for next-generation teaching and learning//Handbook of Research on Pedagogical Models for Next-Generation Teaching and Learning. Hershey, USA: IGI Global: 1-20 [DOI: 10.4018/978-1-5225-3873-8.ch001]
Nifakos S and Zary N. 2014. Virtual patients in a real clinical context using augmented reality: impact on antibiotics prescription behaviors//e-Health—For Continuity of Care. [s.l.]: IOS Press: 707-711 [DOI: 10.3233/978-1-61499-432-9-707]
Oh J Y, Stuerzlinger W and Dadgari D. 2006. Group selection techniques for efficient 3D modeling//3D User Interfaces (3DUI’06). Alexandria, USA: IEEE: 95-102 [DOI: 10.1109/VR.2006.66]
Pickering C, Grignon J, Steven R, Guitart D and Byrne J. 2015. Publishing not perishing: how research students transition from novice to knowledgeable using systematic quantitative literature reviews. Studies in Higher Education, 40(10): 1756-1769 [DOI: 10.1080/03075079.2014.914907]
Ping J M, Liu Y and Weng D D. 2021. Review of depth perception in virtual and real fusion environment. Journal of Image and Graphics, 26(6): 1503-1520 [DOI: 10.11834/jig.210027]
Rasla A and Beyeler M. 2022. The relative importance of depth cues and semantic edges for indoor mobility using simulated prosthetic vision in immersive virtual reality//Proceedings of the 28th ACM Symposium on Virtual Reality Software and Technology. Tsukuba, Japan: ACM: #27 [DOI: 10.1145/3562939.3565620]
Reitmayr G and Schmalstieg D. 2005. Flexible parametrization of scene graphs//Proceedings of IEEE Virtual Reality 2005 (VR 2005). Bonn, Germany: IEEE: 51-58 [DOI: 10.1109/VR.2005.1492753]
Rivu S R and Burschka D. 2019. Context-aware 3D visualization of the dynamic environment//Proceedings of 2019 IEEE Conference on Multimedia Information Processing and Retrieval (MIPR). San Jose, USA: IEEE: 42-47 [DOI: 10.1109/MIPR.2019.00016]
Roesner F, Kohno T and Molnar D. 2014. Security and privacy for augmented reality systems. Communications of the ACM, 57(4): 88-96 [DOI: 10.1145/2580723.2580730]
Rokhsaritalemi S, Sadeghi-Niaraki A and Choi S M. 2020. A review on mixed reality: current trends, challenges and prospects. Applied Sciences, 10(2): #636 [DOI: 10.3390/app10020636]
Rumiński D. 2015. Modeling spatial sound in contextual augmented reality environments//Proceedings of the 6th International Conference on Information, Intelligence, Systems and Applications (IISA). Corfu, Greece: IEEE: 1-6 [DOI: 10.1109/IISA.2015.7387982]
Rumiński D and Walczak K. 2014. Dynamic composition of interactive AR scenes with the CARL language//Proceedings of 2014 IISA International Conference on Information, Intelligence, Systems and Applications. Chania, Greece: IEEE: 329-334 [DOI: 10.1109/IISA.2014.6878808]
Sala N. 2021. Virtual reality, augmented reality, and mixed reality in education: a brief overview//Current and Prospective Applications of Virtual Reality in Higher Education. Hershey, USA: IGI Global: 48-73 [DOI: 10.4018/978-1-7998-4960-5.ch003]
Sato I, Sato Y and Ikeuchi K. 1999. Acquiring a radiance distribution to superimpose virtual objects onto a real scene. IEEE Transactions on Visualization and Computer Graphics, 5(1): 1-12 [DOI: 10.1109/2945.764865]
Savva M, Chang A X, Hanrahan P, Fisher M and Nießner M. 2014. SceneGrok: inferring action maps in 3D environments. ACM Transactions on Graphics, 33(6): #212 [DOI: 10.1145/2661229.2661230]
Speicher M, Hall B D and Nebeling M. 2019. What is mixed reality?//Proceedings of 2019 CHI Conference on Human Factors in Computing Systems. Glasgow, UK: ACM: #537 [DOI: 10.1145/3290605.3300767]
Stretton T, Cochrane T and Narayan V. 2018. Exploring mobile mixed reality in healthcare higher education: a systematic review. Research in Learning Technology, 26: #2131 [DOI: 10.25304/rlt.v26.2131]
Szczuko P. 2014. Augmented reality for privacy-sensitive visual monitoring//Proceedings of the 7th International Conference on Multimedia Communications, Services and Security. Krakow, Poland: Springer: 229-241 [DOI: 10.1007/978-3-319-07569-3_19]
Tahara T, Seno T, Narita G and Ishikawa T. 2020. Retargetable AR: context-aware augmented reality in indoor scenes based on 3D scene graph//2020 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct). Recife, Brazil: IEEE: 249-255 [DOI: 10.1109/ISMAR-Adjunct51615.2020.00072]
Tutenel T, Van Der Linden R, Kraus M, Bollen B and Bidarra R. 2011. Procedural filters for customization of virtual worlds//Proceedings of the 2nd International Workshop on Procedural Content Generation in Games. Bordeaux, France: ACM: #5 [DOI: 10.1145/2000919.2000924]
Vishwanathan S V N, Schraudolph N N, Kondor R and Borgwardt K M. 2010. Graph kernels. Journal of Machine Learning Research, 11: 1201-1242
Wang L L, Cao A T, Li Z C, Yang X F and Popescu V. 2018. Effective free field of view scene exploration in VR and AR//2018 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct). Munich, Germany: IEEE: 97-102 [DOI: 10.1109/ISMAR-Adjunct.2018.00043]
Wang M, Ye Z M, Shi J C and Yang Y L. 2021. Scene-context-aware indoor object selection and movement in VR//Proceedings of 2021 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). Lisbon, Portugal: IEEE: 235-244 [DOI: 10.1109/VR50410.2021.00045]
Xu K, Chen K, Fu H B, Sun W L and Hu S M. 2013. Sketch2Scene: sketch-based co-retrieval and co-placement of 3D models. ACM Transactions on Graphics, 32(4): #123 [DOI: 10.1145/2461912.2461968]
Yu H F, Liang W, Song S H, Ning B and Zhu Y X. 2021. Interactive context-aware furniture recommendation using mixed reality//Proceedings of 2021 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW). Lisbon, Portugal: IEEE: 450-451 [DOI: 10.1109/VRW52623.2021.00107]
Yu L F, Yeung S K, Tang C K, Terzopoulos D, Chan T F and Osher S J. 2011. Make it home: automatic optimization of furniture arrangement. ACM Transactions on Graphics, 30(4): #86 [DOI: 10.1145/1964921.1964981]
Yuen S C Y, Yaoyuneyong G and Johnson E. 2011. Augmented reality: an overview and five directions for AR in education. Journal of Educational Technology Development and Exchange, 4(1): 119-140 [DOI: 10.18785/jetde.0401.10]
Zhou F, Duh H B L and Billinghurst M. 2008. Trends in augmented reality tracking, interaction and display: a review of ten years of ISMAR//2008 7th IEEE/ACM International Symposium on Mixed and Augmented Reality. Cambridge, UK: IEEE: 193-202 [DOI: 10.1109/ISMAR.2008.4637362]