
KEEP TALKING AND NOBODY EXPLODES: A qualitative study on nonverbal communication factors identified in a group problem-solving task

Lara van Wijk, Psychology student at the University of Twente
Supervisor: Prof. Dr. J.M.C. Schraagen, Professor at the University of Twente

Abstract

The aim of this study was to discover nonverbal communication factors that are necessary for successful remote collaboration in diminished reality. There is currently a severe shortage of technically trained staff able to carry out, for example, maintenance or repair tasks. AR is a new technology that might provide a solution to this gap in the job market. A between-subjects design was employed to find the differences between a close collaboration condition, in which participants could see each other, and a remote collaboration condition, in which participants could not see each other and could communicate only by talking. Participants played the game Keep Talking and Nobody Explodes in groups of three, in which they had to defuse a virtual bomb. Two participants handled the Bomb Defusal Manual and one participant acted as the bomb defuser. Communication was categorised with a coding scheme with the following main categories: pointing, gesturing, cross-checking and nonverbal answers. Pointing in a manual for a fellow participant was the only code that reached statistical significance (p = .016): it occurred more often in the remote collaboration condition. Cross-checking approached significance but did not reach it (p = .056). For all other codes there was no difference between the remote and close collaboration groups. This is not in line with previous literature on the topic.
Possible reasons for these results are a significant difference in age between the groups and outliers in the data that may be caused by personality or age.

clear and concise in their communication to the novice. All in all, the expert needs to give clear and concise instructions, give instructions step by step, and try to trigger already existing action sequences of the novice. A suggested tool that the expert can use for this is a checklist for specific tasks, which also makes for better guidance of the novice.

There is some research on AR and the factors needed for successful collaboration using AR. Some of this work is mentioned above, such as the article by Gurevich and colleagues (2015). However, few studies have looked explicitly at the nonverbal communication factors that are essential to good communication in remote collaboration. This study focuses on the identification of such nonverbal communication factors in a problem-solving task. It will provide insights into what developers and users of augmented remote collaboration support should pay attention to in the future in order for the technology to be used to its full potential.

This topic has important societal relevance due to the shortage of technically trained staff (Binvel et al., 2018). If remote collaboration via AR turns out to be as successful as, or more successful than, employing only highly trained technical staff, it is a good way to narrow the gap in the job market. It can also shorten task completion times: during a shift, technicians can spend 45% of their time searching for and reading manuals with instructions (Braly, Nuernberger, & Kim, 2019), and with AR a remote expert does not have to search for and read the whole manual on site. Since some tasks are highly time-sensitive, this is another important factor in the decision to use AR.
The topic also has scientific relevance, as it has not been studied before. Different studies, such as the one by Gelb and colleagues (2011), have touched upon factors that are important in the use of AR in collaboration. However, no study has focused solely on explicitly stating which factors are important for AR to be successful and which need to be incorporated into the technology. Additionally, a call has been made to use between-subjects designs, as within-subjects designs are dominant in studies about AR (Dey, Billinghurst, Lindeman, & Swan, 2018). Within-subjects designs can be problematic because participants may have pre-existing knowledge of the technology and/or of the task; measures may then capture differences between specific participants rather than provide insight into a broader population. Therefore, this research uses a between-subjects design that distinguishes between a close collaboration condition, which reflects a situation where colleagues are face to face, and a remote collaboration condition, which reflects a situation where colleagues cannot see but only hear each other. This is more reminiscent of a real-life work situation, where you need to be able to work with a number of different colleagues in a number of different environments. The aim is to see what happens when team members work together in a diminished reality situation, i.e. with nonverbal communication taken away. This leads to the following research question: "What nonverbal communication factors can be identified in team problem-solving tasks?"

Method

Participants

The sample consisted of 30 participants (20 male, 10 female), with 15 participants in each condition. Within the conditions, participants were divided into groups of three based on availability. The mean age of the close collaboration condition was 22 (SD = 2.88).
For the remote collaboration condition the mean age was 35 (SD = 13.59). The difference in age between the groups is significant (t(28), p < .001). In the close collaboration group 33.3% of participants were Dutch and 66.6% were German; in the remote collaboration group all participants were German. Furthermore, three participants had previously played the game: two in the close collaboration group and one in the remote collaboration group. The only selection criterion was sufficient command of the English language; an exception was made for participants recruited by a fellow researcher. The sample was gathered from the personal networks of the researchers, as recruiting via the University of Twente test person system did not deliver any sign-ups to participate in the study. Ethical approval was obtained from the BMS Ethics Committee of the university.

Materials

The study was performed at two locations, based on the availability of the participants. One location was a secluded room at the University of Twente; the second was the basement of a private house where the experimental set-up was copied. The game Keep Talking and Nobody Explodes was purchased and used as the task that needed to be completed (Defuse a bomb with your friends, n.d.). The game comes with a Bomb Defusal Manual (http://www.bombmanual.com/; version 1), which was used by two of the participants. Data were collected by filming the participants and by asking them to fill out a questionnaire with basic demographic information. Participants were filmed using a GoPro Hero 5 Session at a resolution of 3840x2160.

Design

The study used a between-groups design. In one condition (close collaboration), participants performed the task in the same room at a round table or a set-up that mimicked a round table.
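The age difference reported in the Participants section can be reproduced from the summary statistics alone. Below is a minimal sketch, assuming a pooled-variance Student's t-test (consistent with the reported df of 28; given the very unequal SDs, a Welch correction would be more conservative). The function name is illustrative, not part of any analysis script used in the study.

```python
import math

def pooled_t_from_summary(m1, s1, n1, m2, s2, n2):
    """Student's t for two independent groups computed from summary
    statistics (mean, SD, n), using a pooled variance estimate.
    Returns (t, degrees of freedom)."""
    sp2 = ((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2)
    t = (m2 - m1) / math.sqrt(sp2 * (1 / n1 + 1 / n2))
    return t, n1 + n2 - 2

# Summary statistics reported above: close M = 22 (SD = 2.88),
# remote M = 35 (SD = 13.59), 15 participants per condition
t, df = pooled_t_from_summary(22, 2.88, 15, 35, 13.59, 15)
print(f"t({df}) = {t:.2f}")  # → t(28) = 3.62
```

A t of about 3.6 on 28 degrees of freedom gives a two-sided p on the order of .001, matching the reported p < .001.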
In the other condition (remote collaboration), participants worked in a room split by a makeshift wall so that they could not see each other; they could communicate only by talking.

Procedure

All English-speaking participants were first informed about the purposes of the study via an information sheet and were then asked to sign an informed consent form (see Appendix A and B). Participants then watched a series of YouTube videos (https://www.youtube.com/playlist?list=PLdC3pP79J-A9nuOv0g0oJc0uKej3E3cpT) that explained the game and the manual as a basic form of training. They did not watch all the videos, only those covering the modules included in the testing (see below). For German participants, the informed consent form was translated into German, as were the game and the information sheet. As the basic form of explanation, the fellow researcher showed these participants a previously recorded video of himself solving every module once, with the explanation in German. All participants then started a practice phase in which they performed two simple and two complex practice sessions. Both the explanation of the manual and the practice sessions were used to control for the learning effects that occur when one is new to the game. After these sessions there was no objective test, such as task completion time, to measure the baseline skills of the groups before they went into the trials. During the video explanations, all questions of participants about the game were answered. During the practice sessions, only technical questions were answered, such as how to turn the bomb around. During the trials, no questions were answered.

1. Pointing to the manual in order for someone else to see what is meant. For example: participant 1, an instruction provider, points to a page of participant 2's manual to indicate the correct maze.
2. Pointing in a manual to indicate a location.
For example: participant 1 points in the manual to keep track of the location of the defuser in the maze.
3. Pointing otherwise: for example, to a fellow participant to divide tasks.

Cross-checking by looking: participants who have a manual cross-check each other by looking at the manuals. For example: participant 1, who has a manual, is unsure whether she is looking at the correct maze, so she looks at where participant 2 is on that page.

Nonverbal answers: participants nod or shake their head to answer questions or commands of the participants who handle the manual.

The codes were applied to all nonverbal behaviour indicated in the transcription between square brackets, e.g. [points to manual]; these transcribed nonverbal behaviours were the unit of analysis. It was not indicated between which participants communication went. Sections from both the test sessions, where the basis for the shared mental model was established, and the trials were included. Each coded section could have multiple codes attached to it, depending on the interpretation of the researcher. Transcripts were cross-coded to assess interrater reliability using the tool embedded in Atlas.ti. For further analysis of the data, IBM SPSS Statistics 23 and 25 were used: descriptive statistics and a t-test for age were used to examine the demographics, and the Mann-Whitney U nonparametric test was used to analyse differences between the conditions in their respective numbers of codes.

Results

Outliers and issues

In interpreting the data it must be noted that for groups 1, 4, and 5 of the close collaboration condition the video of the defuser was either not filmed or suffered from technical difficulties. This mostly affected the codes gesturing and nonverbal answers.
For group 1, gesturing was covered for the most part because the hand movements of the defuser were visible in the video of the people handling the manual; this occasionally held for groups 4 and 5 as well. However, since the number of groups per condition is small and there were no large differences, it was decided to keep these groups in the data set. Additionally, only after the first group was it decided that the test sessions were also important for the full picture of how communication was established. This was, for example, obvious in communication about the Keypads module, where participants often gestured to support their descriptions of specific symbols. Therefore, test-session coding for group 1 in the close collaboration condition is not reported.

Within the data there were some outliers. For the code 'nonverbal answers', group 2 in the remote collaboration condition is considered an outlier; in this group, nonverbal answers were quite evenly distributed across two of the three participants. Group 5 of the close collaboration condition and group 4 of the remote collaboration condition are also possible outliers (see Appendix C, Figure 3). Another outlier was group 4 in the close collaboration condition for the code 'pointing in a manual for yourself' (see Appendix C, Figure 4); in this group the behaviour was quite evenly distributed across the two participants handling the manual. Even though there were a number of outliers, it was decided to keep them in the data set because of the limited number of participants. The purpose of describing them here is to show the distribution of the codes.

Comparison between conditions

First, interrater reliability was assessed using the Krippendorff's c-alpha binary coefficient in Atlas.ti (α = 0.952). Table 4 (see Appendix D) shows the distribution of the codes over the conditions. A Friedman test was performed to test for statistical significance.
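The interrater reliability above was computed with the tool embedded in Atlas.ti. As a rough illustration of what that coefficient measures, here is a minimal pure-Python sketch of Krippendorff's alpha for two coders and binary (nominal) codes with no missing data, applied to hypothetical presence/absence judgements (the example data below are invented, not taken from the study):

```python
from collections import Counter

def krippendorff_alpha_binary(coder_a, coder_b):
    """Krippendorff's alpha for two coders, nominal/binary data,
    no missing values, via the coincidence-matrix formulation:
    alpha = 1 - observed disagreement / expected disagreement."""
    units = list(zip(coder_a, coder_b))
    n = 2 * len(units)                 # total number of pairable values
    o = Counter()                      # coincidence matrix
    for a, b in units:                 # with 2 coders, the 1/(m-1) weight is 1
        o[(a, b)] += 1
        o[(b, a)] += 1
    totals = Counter()                 # marginal value frequencies
    for (c, _), v in o.items():
        totals[c] += v
    d_obs = sum(v for (c, k), v in o.items() if c != k) / n
    d_exp = sum(totals[c] * totals[k]
                for c in totals for k in totals if c != k) / (n * (n - 1))
    return 1.0 if d_exp == 0 else 1 - d_obs / d_exp

# Hypothetical presence/absence codings of one code across ten segments
a = [1, 1, 0, 1, 0, 0, 1, 0, 1, 1]
b = [1, 1, 0, 1, 0, 0, 1, 0, 1, 0]
print(round(krippendorff_alpha_binary(a, b), 3))  # → 0.808
```

Perfect agreement yields alpha = 1; values above roughly .80, like the reported α = 0.952, are conventionally taken to indicate reliable coding.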
There was no statistically significant difference in the number of codes between the close collaboration condition and the remote collaboration condition (χ2(2) = 10.714, p = .098). The Independent Samples Mann-Whitney U test was employed to analyse the data nonparametrically. A statistically significant difference between conditions was found for the code 'pointing in a manual for a fellow participant' (U = 1, p = .016). The code 'cross-checking' approached significance (U = 3, p = .056). For all other codes there was no significant difference between the close and remote collaboration conditions.

Table 1
Median and Mann-Whitney U Test Statistic for the Close Collaboration and Remote Collaboration Conditions

                                            Close collaboration   Remote collaboration   Both conditions
Code                                        N    Mdn              N    Mdn               Mann-Whitney U   Exact Sig.
Cross-checking                              5    4                5    10                3.00             .056
Gesturing not related to the game           5    2                5    3                 11.50            .841
Gesturing related to the game               5    11               5    5                 9.00             .548
Nonverbal answers                           5    12               5    27                6.00             .222
Pointing to manual for fellow participant   5    4                5    9                 1.00             .016
Pointing to manual for yourself             5    111              5    67                10.00            .690
Pointing otherwise                          5    1                5    0                 10.50            .690

Discussion

The aim of this research was to answer the research question: what nonverbal communication factors can be identified in team problem-solving tasks? The results showed that only pointing in a manual for someone else could be identified as a nonverbal behaviour that occurred more often in the remote collaboration condition than in the close collaboration condition; cross-checking came very close to the same result. Pointing in a manual for fellow participants means that a shared visual context is established, which Gurevich and colleagues (2015) deemed important. A shared visual context is also important in AR, both qualitatively and quantitatively.
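The exact Mann-Whitney result for 'pointing in a manual for a fellow participant' (U = 1, p = .016) can be checked against the per-group counts in Appendix D. The following is a minimal pure-Python sketch of an exact two-sided test that enumerates every possible assignment of the pooled values to the two groups; this brute-force approach is practical only for samples as small as these (5 groups per condition):

```python
from itertools import combinations

def u_statistic(x, y):
    """Mann-Whitney U for sample x: pairs with x_i > y_j (+0.5 per tie)."""
    return sum((xi > yj) + 0.5 * (xi == yj) for xi in x for yj in y)

def exact_two_sided_p(x, y):
    """Exact two-sided p-value: the share of all assignments of the pooled
    values to two groups whose U deviates from its null mean (n1*n2/2)
    at least as much as the observed U."""
    pooled = list(x) + list(y)
    n1, mu = len(x), len(x) * len(y) / 2
    dev = abs(u_statistic(x, y) - mu)
    hits = total = 0
    for idx in combinations(range(len(pooled)), n1):
        chosen = set(idx)
        xs = [pooled[i] for i in chosen]
        ys = [pooled[i] for i in range(len(pooled)) if i not in chosen]
        total += 1
        hits += abs(u_statistic(xs, ys) - mu) >= dev - 1e-9
    return u_statistic(x, y), hits / total

# Per-group counts of 'pointing in a manual for a fellow participant'
# (Tables 2 and 3, Appendix D)
close = [4, 5, 2, 3, 7]
remote = [9, 9, 13, 9, 6]
u, p = exact_two_sided_p(close, remote)
print(u, round(p, 3))  # → 1.0 0.016
```

With 5 observations per condition there are only C(10, 5) = 252 assignments, and 4 of them are at least as extreme as the observed one, reproducing the reported exact p of .016.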
This research did provide some additional insights and confirmations to the existing body of literature on communication and AR, and the topic will remain interesting in the years to come as the technology becomes more ingrained in the business world.

References

Aryzon - 3D Augmented Reality Headset. (n.d.). Retrieved February 27, 2019, from https://www.aryzon.com/
Binvel, Y., Franzino, M., Laouchez, J., & Penk, W. (2018). Future of work: The global talent crunch (Rep.). Korn Ferry Institute.
Braly, A. M., Nuernberger, B., & Kim, S. Y. (2019). Augmented reality improves procedural work on an International Space Station science instrument. Human Factors: The Journal of the Human Factors and Ergonomics Society. doi:10.1177/0018720818824464
Clark, R., Freedberg, M., Hazeltine, E., & Voss, M. W. (2015). Are there age-related differences in the ability to learn configural responses? PLOS ONE, 10(8). https://doi.org/10.1371/journal.pone.0137260
Darken, R. P., & Peterson, B. (2002). Spatial orientation, wayfinding, and representation. In K. Stanney (Ed.), Handbook of Virtual Environments. Lawrence Erlbaum.
Dey, A., Billinghurst, M., Lindeman, R. W., & Swan, J. E. (2018). A systematic review of 10 years of augmented reality usability studies: 2005 to 2014. Frontiers in Robotics and AI, 5. doi:10.3389/frobt.2018.00037
Fletcher, T. D., & Major, D. A. (2017). The effects of communication modality on performance and self-ratings of teamwork components. Journal of Computer-Mediated Communication, 11(2), 557-576. https://doi.org/10.1111/j.1083-6101.2006.00027.x
Gelb, D., Subramanian, A., & Tan, K. H. (2011). Augmented reality for immersive remote collaboration. In 2011 IEEE Workshop on Person-Oriented Vision, POV 2011 (pp. 1-6). https://doi.org/10.1109/POV.2011.5712368
Gurevich, P., Lanir, J., & Cohen, B. (2015). Design and implementation of TeleAdvisor: A projection-based augmented reality system for remote collaboration. Computer Supported Cooperative Work (CSCW), 24(6), 527-562.
https://doi.org/10.1007/s10606-015-9232-7
HoloLens 2. (n.d.). Retrieved February 27, 2019, from https://www.microsoft.com/da-K/hololens
Johnson, J. (2014). Our attention is limited; our memory is imperfect. In Designing with the Mind in Mind (2nd ed., pp. 87-105). Elsevier Science & Technology. doi:10.1145/2702613.2706667
Kieras, D., & Polson, P. G. (1985). An approach to the formal analysis of user complexity. International Journal of Man-Machine Studies, 22(4), 365-394. https://doi.org/10.1016/S0020-7373(85)80045-6
Smith, E. E., & Kosslyn, S. M. (2007). Cognitive Psychology: Mind and Brain (Vol. 6). Pearson/Prentice Hall.

Appendix B: Information sheet

Dear participant,

In this experiment you are going to play the computer game "Keep Talking And Nobody Explodes". The purpose of this game is to defuse a virtual bomb with your team within a time period of five minutes. The virtual bomb consists of three different small riddles, which you need to solve in order to defuse the bomb successfully. You will occupy one of the two roles needed to defuse the bomb: you are either the participant who is defusing the bomb or one of the instruction providers, also known as experts. The participant who is defusing the bomb can see the bomb on the screen but does not know how to defuse it. The experts will receive a manual, which includes rules on how to solve the small riddles on the bomb and subsequently defuse it. These rules will be communicated verbally by the experts to the participant who is defusing the bomb. Thus, you need to communicate to defuse the bomb. Playing the game could lead to stressful reactions in some participants, caused by time pressure. The experimental setup, which includes this stress factor, was reviewed and approved by the BMS Ethics Committee. Participation in this study is voluntary and you have the right to withdraw from the study at any point in time during the process of the study or afterwards.
If you want to withdraw after the study is over, you can contact us via email. If you decide to withdraw from the study during the experiment or afterwards, the collected data will be deleted and will not be used for any publication purposes.

Before the start of the experiment, basic demographic information will be collected: age, gender and nationality. These data will be collected in an anonymized manner. During the experiment, other data will be collected. The sessions will be video recorded for retrospective analysis purposes. The video data will be stored on a hard drive secured by a password known only to the researchers. When the study is finished, the video material will be deleted. Additionally, after each trial of the experiment, the task completion time of your team will be transcribed. The transcribed completion times will not include any personalized data. These data will be stored for a retention period of five years after the submission of the thesis. The collected data will be used for the publication of a bachelor thesis. The video data will be coded and analyzed afterwards. The task completion times will be used for statistical analysis. You have the right to inspect the data collected from the experiment in which you participated at any point in time. Even after the experiment has ended, you can contact us to schedule an appointment for inspection purposes.
Contact information:
Ethics committee: ethicscommittee-bms@utwente.nl

Enschede, 15th of April, 2019

Appendix C: Boxplots for outliers

Figure 3. Boxplot of nonverbal answers per condition
Figure 4. Boxplot of pointing in a manual for yourself per condition

Appendix D: Distribution of Codes over Groups and Conditions

Table 2
Distribution of Codes over the Close Collaboration Condition

Code                                           Group 1   Group 2   Group 3   Group 4   Group 5   Totals
Cross-checking                                 4         4         7         10        2         27
Gesturing not related to things in the game    6         1         2         5         1         15
Gesturing related to things in the game        4         9         16        11        18        58
Nonverbal answers                              2         12        3         12        33        62
Pointing in a manual for a fellow participant  4         5         2         3         7         21
Pointing in a manual for yourself              18        80        111       216       125       550
Pointing otherwise                             0         1         3         1         0         5
Totals                                         38        112       144       258       186       738

Table 3
Distribution of Codes over the Remote Collaboration Condition

Code                                           Group 1   Group 2   Group 3   Group 4   Group 5   Totals
Cross-checking                                 15        10        11        5         10        51
Gesturing not related to things in the game    4         3         8         0         2         17
Gesturing related to things in the game        18        15        5         3         4         45
Nonverbal answers                              23        95        27        5         31        181
Pointing in a manual for a fellow participant  9         9         13        9         6         46
Pointing in a manual for yourself              14        67        44        143       133       401
Pointing otherwise                             0         1         3         0         0         4
Totals                                         83        200       111       165       186       745

Table 4
Distribution of Codes across Conditions

Code                                           Remote collaboration   Close collaboration   Totals
Cross-checking                                 51                     27                    78