
Story CreatAR: A Toolkit for Spatially-Adaptive Augmented Reality Storytelling

Abbey Singh*, Ramanpreet Kaur†, Peter Haltner‡, Matthew Peachey§ (Dalhousie University, Halifax, Canada); Mar Gonzalez-Franco¶ (Microsoft Research, Redmond, Washington, USA); Joseph Malloch||, Derek Reilly** (Dalhousie University, Halifax, Canada)

ABSTRACT

Headworn Augmented Reality (AR) and Virtual Reality (VR) displays are an exciting new medium for locative storytelling. Authors face challenges planning and testing the placement of story elements when the story is experienced in multiple locations or the environment is large or complex. We present Story CreatAR, the first locative AR/VR authoring tool that integrates spatial analysis techniques. Story CreatAR is designed to help authors think about, experiment with, and reflect upon spatial relationships between story elements, and between their story and the environment. We motivate and validate our design through developing different locative AR/VR stories with several authors.

Keywords: augmented reality, space syntax, storytelling, proxemics, f-formations, authoring toolkit, head-mounted display

Index Terms: Human-centered computing—Human computer interaction (HCI)—Interactive systems and tools—User interface toolkits; Human-centered computing—Human computer interaction (HCI)—Interaction paradigms—Mixed / augmented reality; Human-centered computing—Interaction design—Interaction design process and methods—User centered design

*Abbey.Singh@dal.ca †rm216536@dal.ca ‡Peter.Haltner@dal.ca §peacheym@dal.ca ¶margon@microsoft.com ||jmalloch@dal.ca **reilly@cs.dal.ca

1 INTRODUCTION

Augmented reality (AR) head-mounted displays (HMDs) provide an immersive, hands-free experience. Untethered AR HMDs such as the Microsoft HoloLens [37] and Magic Leap One [2] permit indoor AR experiences that can span multiple rooms, and even entire buildings. This creates new opportunities for locative storytelling, defined in this paper to include both stories that adapt to the physical environment they are experienced in, and stories that are created for specific locations. Authors can use an immersive HMD to present characters and objects at scale within the user's visual field of view (FoV), albeit constrained by the display's FoV. This differs from handheld AR: when the viewer must hold the screen, the content they will see when entering a room is less predictable [47], and it is more difficult to see and interact with "life-sized" content such as a human avatar, particularly as the viewer approaches the content. Since recent untethered AR (and VR) HMDs employ inside-out tracking, authors can precisely place story elements anywhere in a room
and across multiple rooms without the need for physical markers. By contrast, marker-based approaches limit content placement to regions where the marker is clearly visible to the device's camera. AR HMDs offer authors the promise of rich integration of virtual story elements and physical environments. When the physical location is unavailable, the experience can be simulated on a VR HMD that renders both the content and its intended setting. When authors know the location of their story, they can use physical navigation, visual unfolding, proximity and orientation, and other spatial aspects as tools for effective AR storytelling.

However, authors face significant impediments to accomplishing such integration. As a story becomes more complex or covers a larger space, manual placement becomes time consuming, and it can be difficult to consider all visual perspectives and physical approaches for placing story elements so that they integrate seamlessly with the environment. Content placement is made yet more difficult when a story is meant to be experienced in different or unknown locations. For example, an author may want a flexible story to take place in several different locations or to adapt to an unknown location. While approaches have been explored for spatially aware or site-adaptive AR/VR [16, 21, 36, 50], just-in-time adaptation and room-scale techniques find locally optimal placements given their limited knowledge of the environment; they do not necessarily find the globally optimal placement for story content based on the layout of an entire building.

In this paper we present Story CreatAR, the first authoring tool for locative media that integrates spatial analysis methods from architecture and urban planning (space syntax) and socio-spatial theory from the social sciences (proxemics and F-formation theory) to facilitate the development of both site-specific and site-adaptive narrative experiences for immersive AR and their simulation in VR. Story CreatAR is designed to help authors think about, plan, and test the spatial organization of their story, independent of the environment their story will be experienced in. It aligns story elements with spatial qualities that enhance their meaning and impact. Using Story CreatAR, authors can:

• define placement rules, indicating how story elements should be placed based on spatial characteristics (e.g., visual complexity, openness, visual integration),
• define traversal rules, assigning navigation behaviour to characters based on distance, urgency, and spatial features,
• tie interactive story elements to spatial characteristics and proxemic relationships,
• group elements so they can be placed together, and define a series of groupings with overlapping membership to express a story's event sequence, and
• define formation rules, fine-tuning how groups of elements are placed using F-formations, proxemics, and room boundaries.

Story CreatAR is designed to support an authoring workflow based on these five key features. It provides authors with access to rich humanoid avatars [39] and their animations [24], spatialized audio, and objects they source themselves. The author can flexibly label and group this content in ways that are meaningful for their story, making it easier to structure events and to apply placement, formation, and traversal rules. Basic spatial characteristics can be combined and labelled into what we call attributes: high-level spatial properties with names that are meaningful to the author (e.g., hidden area, broad vista, meeting place). Authors can load floor plans of target locations if known (or sample locations if not), triggering automatic spatial analysis. Authors can directly view how story elements will be placed based on the defined rules.
When the author is satisfied with the story or wants to test an intermediate draft, a Unity Scene [45] is generated, which can be used for post-processing, testing or deployment in VR (using Oculus Quest), or deployment onsite in AR (using HoloLens 2).

The conception and design of Story CreatAR followed an iterative user-centered design process, working closely with new media artists and authors from inception to refinement, in order to identify desired features and to evaluate our design. In this paper we detail our design process, including the development of several locative stories for AR HMDs that can also be experienced in VR. We then describe the Story CreatAR interface and systems implementation, and outline directions for future work.

2 RELATED WORK

2.1 Spatial Analysis

2.1.1 Space Syntax

Space syntax [7] refers to a body of metrics and analysis techniques for understanding the spatial characteristics of a building or urban area, and is often used for predicting how people will engage with and flow through that space. A detailed background of space syntax is beyond the scope of this paper, but is readily found elsewhere [4]. Space is measured and classified using a number of techniques: axial (line of sight or traversal), isovist (visible area from a point), convex (enclosed region), manual classification (e.g., room, building, road), or segment (each segment identified by some primary measure such as line of sight). From these, several spatial characteristics of an individual location can be derived, including its openness (a measure of the open space surrounding a location), its visual complexity (roughly a measure of the number of different vantage points from which a location is visible), and its visual integration (a measure of how likely a location will become visible as a person moves about a space). Locations with varying degrees of openness and visual complexity are shown in Figure 1. Through agent-based analysis we can generate a location's gate count (the likelihood a location will be crossed as one traverses the space). Segment analysis by room includes constructing a connectivity graph, where rooms are nodes and connecting doorways are edges. With this graph we can determine a room's centrality (how likely it is that a room will be encountered as one moves through the space) and accessibility (how easy it is to access from anywhere in the building). Many other measures are defined within space syntax, and space syntax analyses often combine different techniques.

Figure 1: Left: a location with high openness. Middle: a location with high visual complexity. Right: a location with low values for both.

Various tools integrate space syntax analysis, including Grasshopper [12] by Rhino, UCL depthmapX [18], and QGIS [3]. In Story CreatAR we use depthmapX. depthmapX [46] is an open source spatial analysis tool developed at the Space Syntax Laboratory, University College London. It facilitates spatial analysis at a building or urban scale by analyzing floor plans or city maps, respectively. Isovist-based spatial properties are computed using a visibility graph analysis, while agent analysis provides metrics relevant to movement flow and behaviour. The tool also provides segment-based and other analyses. depthmapX takes input in a vector graphics format such as Drawing Exchange Format (DXF); raster images of 2D floor plans must first be converted into vector format using a tool such as Inkscape [32] or AFPlan [49]. Story CreatAR integrates depthmapX via its command-line interface and CSV output containing space syntax analysis values for each position in the floor plan.
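The paper does not reproduce the integration code, but the shape of it is straightforward: run the analysis from the command line, then parse the per-position CSV. The following is a minimal C# sketch of what such an integration could look like. The depthmapXcli flags and CSV column names shown here are assumptions based on depthmapX conventions, not a verified interface; consult the depthmapX documentation for the real invocation.

// Sketch: invoking a depthmapX command-line analysis and reading its CSV
// export. Flags and column names are assumptions, not the documented CLI.
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.IO;

public struct SpatialSample
{
    public float X, Y;
    public float Openness;          // isovist area
    public float VisualComplexity;  // isovist perimeter
    public float VisualIntegration;
}

public static class DepthmapRunner
{
    public static List<SpatialSample> Analyze(string dxfPath, string csvPath)
    {
        // Hypothetical invocation: run a visibility graph analysis (VGA)
        // on the vector floor plan and export per-point values as CSV.
        var psi = new ProcessStartInfo("depthmapXcli",
            "-f \"" + dxfPath + "\" -o \"" + csvPath + "\" -m VGA")
        { UseShellExecute = false };
        using (var p = Process.Start(psi)) { p.WaitForExit(); }

        var samples = new List<SpatialSample>();
        string[] lines = File.ReadAllLines(csvPath);
        string[] header = lines[0].Split(',');
        int xi = Array.IndexOf(header, "x");
        int yi = Array.IndexOf(header, "y");
        int area = Array.IndexOf(header, "Isovist Area");
        int perim = Array.IndexOf(header, "Isovist Perimeter");
        int integ = Array.IndexOf(header, "Visual Integration [HH]");

        for (int i = 1; i < lines.Length; i++)
        {
            string[] f = lines[i].Split(',');
            samples.Add(new SpatialSample
            {
                X = float.Parse(f[xi]),
                Y = float.Parse(f[yi]),
                Openness = float.Parse(f[area]),
                VisualComplexity = float.Parse(f[perim]),
                VisualIntegration = float.Parse(f[integ]),
            });
        }
        return samples;
    }
}

Caching the parsed samples per floor plan corresponds to the behaviour described later in section 5.3.1, where CSV output is saved so that space syntax values need not be recomputed on every run.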
2.1.2 Proxemics

Proxemics [8–10, 25–27, 34] refers to human use and subjective experience of space. Proxemic distance can be categorized as intimate, personal, social, and public. Intimate space indicates close proximity to another person, typically a loved one. Personal space is the distance at which you might talk with a friend. Social space is often used in group conversation, such as talking with colleagues. Public space refers to the space between strangers in a park, for example. Notably, the physical distances for proxemic spaces differ between cultures, so adapting distances for target users is important.

ProxemicUI [8] is a software framework for proxemics-aware applications. ProxemicUI processes Open Sound Control (OSC) messages containing position and orientation information for tracked entities captured by a tracking source (e.g., Azure Kinect). It uses this information to create proxemic events, which are only executed when tracking data and external event data meet the requirements for one or more associated user-defined relative/absolute distance/orientation rules. Story CreatAR incorporates ProxemicUI to respond to proxemic events between avatars and the viewer.

2.1.3 F-formations

F-formations [17, 22, 31, 33, 35] describe the spatial relationships between people in social spaces. F-formations occur when multiple people are involved in an interaction and orient themselves with respect to the space they inhabit and to each other. There are three different social spaces: o-space, p-space, and r-space. O-space is the region where the social interaction takes place; the people or objects that are the source of the interaction face this region. P-space is the region where the people and objects involved in the interaction reside. R-space is the region beyond the p-space. The positions and orientations of people walking in pairs, playing in a field, or eating at a table can all be represented as F-formations. F-formations inform how avatars are grouped together in Story CreatAR.

2.1.4 Room-based analysis

Room-based analysis techniques permit the identification of rooms given a floor plan. Over the past two decades, a range of techniques have been demonstrated, working with vector graphics [19] or raster images [1, 49], sometimes to extract a 3D representation [40], or for navigation support [48]. In Story CreatAR this is used to help authors specify placement boundaries for story assets. For example, consider placing three pieces of furniture in the same room, named "Common Room". Room-based analysis can associate this "Common Room" with a room in a target setting based on room properties. We can use the output of room-based analysis to construct a connectivity graph automatically, allowing authors to associate story assets with rooms that are difficult to access (low accessibility), or ones that are likely to experience a lot of foot traffic (high centrality).
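To make the connectivity graph concrete, here is a minimal C# sketch: rooms are nodes, doorways are edges, and one plausible formalization of accessibility is the mean shortest-path (doorway-count) distance from a room to every other room, with lower mean depth meaning more accessible. The class and metric names are ours for illustration, not Story CreatAR's API.

// Sketch: a room connectivity graph built from room-based analysis output.
// Accessibility is approximated as mean BFS depth; illustrative only.
using System.Collections.Generic;
using System.Linq;

public class RoomGraph
{
    private readonly Dictionary<int, List<int>> adjacency =
        new Dictionary<int, List<int>>();

    public void AddDoorway(int roomA, int roomB)
    {
        if (!adjacency.ContainsKey(roomA)) adjacency[roomA] = new List<int>();
        if (!adjacency.ContainsKey(roomB)) adjacency[roomB] = new List<int>();
        adjacency[roomA].Add(roomB);
        adjacency[roomB].Add(roomA);
    }

    // Breadth-first search gives the doorway-count distance to every room.
    public double MeanDepth(int start)
    {
        var depth = new Dictionary<int, int> { [start] = 0 };
        var queue = new Queue<int>();
        queue.Enqueue(start);
        while (queue.Count > 0)
        {
            int room = queue.Dequeue();
            foreach (int next in adjacency[room])
                if (!depth.ContainsKey(next))
                {
                    depth[next] = depth[room] + 1;
                    queue.Enqueue(next);
                }
        }
        return depth.Where(kv => kv.Key != start).Average(kv => kv.Value);
    }
}

An author-facing rule such as "low accessibility" could then select rooms whose mean depth falls in a high percentile band, mirroring how spatial attributes are banded in section 5.2.2.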
3.2 Phase Two

In the second phase, we collaborated with three media studies students at Dalhousie University to implement three longer stories using Story CreatAR. None of the students had experience creating locative AR or VR stories, but all were interested in experimenting with the new medium. During this phase we met weekly with the authors, demoing Story CreatAR features, discussing potential uses of spatial analysis for immersive AR narratives, and exploring examples of AR and VR narrative in literature and on commercial platforms. When writing their stories the authors were asked not to constrain their narratives to fit the current capabilities of the tool; rather, we would implement features to meet the needs of their stories. Story drafts were made available to the development team, who identified additional Story CreatAR requirements implicit in the story details.

When story drafts were ready to be implemented, we followed a different approach with each author. In the first instance the author provided a complete draft and a developer used Story CreatAR to generate the story. In the second, the author and a developer worked together to produce a graph-based representation of the story whose elements mapped to Story CreatAR rules and features. For the third story, the author used Story CreatAR directly in a think-aloud approach and was able to ask the developer questions.

4 CASE STUDIES

In this section we describe three of the four stories written in parallel with the development of Story CreatAR—The Lost Child, Tyson's Peak, and Standville Museum—and reflect on how they motivated design decisions and Story CreatAR features. The fourth story (Spill) informs our future work and is described in section 6.1.

4.1 The Lost Child

The Lost Child was created in collaboration with Colton for the Snap Creative Challenge. It is an interactive story about a concerned father who has lost track of his son's whereabouts. The viewer assists the father in locating the lost boy. At that time Story CreatAR was in an early state of implementation, so the story was implemented directly in Unity, while still considering proxemic events, spatialized dialogue, custom animations, and space syntax-based placement.

4.1.1 Impact on Story CreatAR Design

This story relies primarily on dialogue, proxemic events, and avatar animations and placement. It therefore suggests that the tool needs to provide a way to attach dialogue and custom animations to an avatar, and to play that dialogue when the player is within a specific distance of the avatar. The story also emphasizes the need to arrange elements such as avatars into a given formation: in this story, concerned bystanders gathered around the father.

4.2 Tyson's Peak

Figure 3: Avatars gathered in the common room for Tyson's Peak.

Tyson's Peak, authored by Tory, is a murder mystery screenplay. Caught in a snowstorm, the viewer is taken in by a group of eight friends at their ski cabin. Trapped in the cabin by the snow, one member of the group is murdered with poison. The player listens to and converses with the characters to learn more about each of them through this story filled with deceit and affairs. Portions of Tyson's Peak are presented in our video figure.

Tory has limited technological expertise, but has won several creative writing prizes. She first wrote a complete draft, and then a developer implemented the first 7 minutes of the story. The implementation process used the placement functionality of Story CreatAR completed in phase one to define the avatars and props of the story with their associated space syntax attributes. The story implementation first involved preparing required assets such as props.
The draft included high-fidelity details regarding the props; however, we were limited to royalty-free models, which were used as substitutes. Avatar-based story elements were created in Story CreatAR, and third-party models were imported for props. Tory was provided with a list of potential avatars to choose from, with the option to request custom avatars. Voice lines for dialogue were recorded, edited, and imported. Functionality for configuring conversations and dialogue was developed alongside this story and implemented directly in Unity. Once the assets were placed using Story CreatAR, conversation nodes—a way to gather avatars in conversation—were manually attached as components. These conversation nodes initialize where the avatars spawn. A list of conversation players was then used to define dialogue between avatars. Lastly, a state controller was used to progress the story, utilizing timers to trigger traversal events.

A recording of the VR story playback was shared with Tory, as she did not have a VR headset and ongoing COVID-19 restrictions were in place. Tory's reaction was positive, complimenting the asset usage and the flow of the story. Tory's story design choices were reflected in the final product. For example, the looping dialogue was noted by the author's supervisor: "I found this really good for teaching the participant how the game works – it's anti-realistic, but effective". Tory responded: "this was a choice – and based on experience of video games (Pokemon and Zelda) which use this to cue the participant to move on".

4.2.1 Impact on Story CreatAR Design

This case study clearly identified functional requirements, including the need to place avatars in a conversation, add dialogue, group and manage assets, and create and assign animations to avatars. We exposed the ability to create generic conversations with associated dialogue in the Story CreatAR UI, helping to automate the scene creation process. Moreover, Tyson's Peak illustrated the need to place conversation nodes using placement rules, since individual avatars may come and go during the conversation. Avatars in the conversation were assigned dialogue, but custom animations were required to make them appear more life-like. This case study also illustrated the need to manage and group assets: grouping organizes the large number of assets required for the story into different categories, and is a useful way to apply the same placement, formation, or transition rules to many story elements at once. This case study further showed the need for generalized event management to progress the story, as there was little author support for that at the time. Traversal and timer events were added to progress story action.

4.3 Standville Museum

David, a novice computer user and experienced book writer, authored Standville Museum. David and his supervisor directly used Story CreatAR to create the initial placement of assets in his story. In the story, the player takes the role of a detective who enters an art museum with his son. The story takes an ominous turn when the detective's son is kidnapped, and follows multiple trajectories leading to different endings based on user decisions. The objective is to find the detective's son by solving riddles and finding clues.

Due to COVID-19 restrictions, David and the developer met remotely; David used Remote Desktop to control the developer's screen in order to use Story CreatAR. First, David added story elements: avatars and objects.
Notably, on his first time through, David moved on to the placement screen without creating any groups for the story elements, even though groups are created on the same screen. Later, David expressed confusion about the utility of groups. However, grouping story elements for placement would be useful in his story, as he wanted to place multiple objects within a room. Second, he added different default attributes to his story elements to specify placement constraints. For example, one of the protagonists was given the "Open Area" attribute. Significantly, David did not create new attributes or focus on the lower-level space syntax characteristics when placing his story elements, even though he was exposed to these characteristics as part of his collaboration. David reapplied the rules to view different placements of the story elements. Then, he saved his scene and made minor adjustments to the placement of the story elements. As a first-time user of Unity, David had difficulty making these minor adjustments to the placement and orientation of his story elements and was aided by a developer.

4.3.1 Impact on Story CreatAR Design

Standville Museum required some story elements to be placed in the same room, and certain rooms to be connected to each other. This led to the incorporation of the room-based floor plan analysis tool, AFPlan [49]. David also wanted precise control over where certain clues were placed; this suggested a need for placement specifications such as "On wall", "Against wall", and "In Room X", which could be accomplished by integrating a tool such as FLARE [21]. To provide more precise control, we added the ability to place objects in rooms; for more specific placement, David had to adjust object positions manually. Two other features, however, were not implemented due to prioritization and time constraints. The first is the ability to make minor adjustments to story element placement through a drag-and-drop interface in Story CreatAR itself, instead of waiting until the Unity Scene is generated. The second is showing images of third-party objects, instead of a file name, when creating story elements.

Overall, David enjoyed Story CreatAR, stating, "I liked that we don't have to worry about going around the map and placing everything". He provided a number of recommendations to improve the UI that we plan to incorporate, such as visually illustrating that a spatial attribute assigned to a group is applied to the group's individual story elements.

5 SYSTEM DESIGN

5.1 Overview

Story CreatAR was developed as a Unity plugin. Unity [45] is a robust game engine for cross-platform development, including AR/VR. Story CreatAR will be released as open source software upon publication. A novice user could independently run Story CreatAR in Unity and use it to implement their story. They can easily create avatar-driven narratives; however, they would require developer support for advanced features or story-specific events (e.g., spinning around a stump three times).

Figure 4 shows an overview of how Story CreatAR is used. The author creates their story by specifying and grouping story elements and events, and then adds constraints for their location and behaviour by way of placement, formation, and traversal rules. Spatial analysis tools (depthmapX and AFPlan) are used to import spatial attributes for a deployment or testing environment.
Then, the author can generate a Unity scene to visualize the placement in the selected environment. The author can additionally test the experience of the story in VR, or onsite in AR. The author can iterate on this process until they are satisfied with their generated experience.

Figure 4: Overview of Story CreatAR.

5.2 Authoring Support

5.2.1 Content Creation

Story CreatAR provides a range of ways to create and import content, including human characters (avatars), props such as tables or chairs, and 3D sound. Content can be third-party assets, the author's own work, or the default assets from Story CreatAR. Content associated with a name is known as a story element in the interface. An additional content type, the compound story element, allows authors to create conversation nodes, used to create conversations between avatars or between avatars and the player. Story elements may be part of 0-to-many groups. Groups have a unique identifier and consist of 1-to-many story elements. The default assets in our system consist of rigged avatar-based story elements from the Rocketbox Avatars [39]. MoveBox [24] is an open source toolkit that allows content creators to view and record motion capture data directly in Unity in real time using a single depth camera such as Azure Kinect or Kinect V2. Authors are able to integrate both the Microsoft Rocketbox avatars and the MoveBox toolkit to generate realistic human characters for their stories.

5.2.2 Asset Placement

Asset placement depends on spatial analysis of the environment. Spatial attributes are used to place story elements in appropriate locations. This allows authors to specify generic placement rules once and play their story in many environments. Higher-level placement rules called attributes can be associated with story elements. Each attribute has a meaningful name, which is associated with one or more spatial characteristics. Currently the spatial characteristics we support are openness, visual complexity, and visual integration, each with a range of values: lowest (the lowest 0–20% of values), low (20–40%), moderate (40–60%), high (60–80%), highest (80–100%), or any (0–100%). The attribute names (e.g., "Hidden") and values (e.g., "lowest") were derived from how the authors spatially described elements in their story drafts. To illustrate, an author might place a garbage can in a Hidden space, which is associated with the lowest visual complexity values and low visual integration values, while a clue may be placed in an Easy to Find and Open Area space. Table 1 illustrates a sample of the attributes available to the author in the interface and their spatial meaning. Additionally, the author can create custom attributes by specifying a unique name and the values for the space syntax characteristics. A story element's own attributes have a higher priority than group attributes, so that the author can provide unique placement for story elements within a group. The ordering within each is based on the order they were added, with the first added having the highest priority.

Table 1: Attributes with their corresponding spatial characteristics.

Attribute Name   Openness   Visual Complexity   Visual Integration
Hidden           Any        Lowest              Low
Easy to Find     Any        Any                 Highest
Open Area        Highest    Any                 Any
Random           Any        Any                 Any
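The mapping from attribute names to percentile bands can be expressed compactly. Below is a minimal C# sketch under our own type names (SpatialAttribute, Band); only the band boundaries come from the text above, and this is not Story CreatAR's actual API.

// Sketch: named attributes as percentile bands over spatial characteristics.
// Band boundaries follow the lowest/low/moderate/high/highest ranges above.
public enum Band { Lowest, Low, Moderate, High, Highest, Any }

public class SpatialAttribute
{
    public string Name;
    public Band Openness = Band.Any;
    public Band VisualComplexity = Band.Any;
    public Band VisualIntegration = Band.Any;

    // A raw value is first normalized to a 0-100 percentile against all
    // sampled positions, then checked against the band's range.
    static bool InBand(Band band, float percentile)
    {
        switch (band)
        {
            case Band.Lowest:   return percentile < 20f;
            case Band.Low:      return percentile >= 20f && percentile < 40f;
            case Band.Moderate: return percentile >= 40f && percentile < 60f;
            case Band.High:     return percentile >= 60f && percentile < 80f;
            case Band.Highest:  return percentile >= 80f;
            default:            return true; // Band.Any
        }
    }

    public bool Matches(float open, float complexity, float integration)
    {
        return InBand(Openness, open)
            && InBand(VisualComplexity, complexity)
            && InBand(VisualIntegration, integration);
    }
}

// Example: the "Hidden" attribute from Table 1 would be
//   new SpatialAttribute { Name = "Hidden",
//       VisualComplexity = Band.Lowest, VisualIntegration = Band.Low };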
Authors can also override attribute priorities for each individual story element by editing the story element and inputting the priority (1 = highest, n = lowest) for each attribute until the attributes are in the desired order. For example, suppose Bob is in the group "Main Characters". The author wants Bob to be placed in an open area next to the other main characters, but if the location does not permit that, then Bob should hide. Therefore, Main Characters has the Open Area attribute and Bob has the Hidden attribute. By default, Bob's priority ordering is (1) Hidden, (2) Open Area; to get the desired outcome, the author needs to move Open Area to priority (1). Using the attributes and priorities, the initial placement of story elements is computed. Conversation nodes can be placed in the same way. Once the author is satisfied with how the tool placed the story elements, they can save the story elements' positions into a new Unity Scene, and can then make manual micro-adjustments to each asset's placement in the Unity Scene.

The Story CreatAR interface also provides the ability to create rooms. A room is specified by its size (small, medium, large), the number of entrance points it has (1, 2, or many), and the other rooms it should be directly connected to. These properties are specified so that the created room can be automatically associated with a physical room on different maps. These abstract rooms can be used in placement rules, such that an object with the rule attached must be placed within that room.

5.2.3 Event Configuration

In the Story CreatAR interface, the author may create conversation nodes, currently the only supported compound story element. Conversation nodes have several properties: type of conversation, formation, initial and all avatars placed, out-of-range audio, and conversation dialogue both for when the avatars are talking amongst themselves and for when they are talking to the viewer. The type of conversation is intimate, personal, or social, which refers to the proximity of the avatars. Formations [17] are circle, semi-circle, or line, which refers to how the avatars are placed. The simple terminology used to define formations in the interface hides the inherent complexity of the spatial analysis, while making it easier for authors to understand. Initial avatars are the avatars in the conversation at the story's start; all avatars includes every avatar that will be in the conversation at some point during the story, but may not be there at the start. Out-of-range audio, such as crowd noise, can be attached for when the player is out of range of a conversation node. Dialogue players can be attached to a conversation to create different sets of dialogue. Conversation dialogue requires a trigger to activate, which, for example, can be a proxemic relative distance rule where the player must come within 2.5 meters of the conversation [8]. When specifying the conversation dialogue, the author can choose whether it loops, the wait time between loops (in seconds), and a set of voice lines. Each voice line specifies an actor (avatar) to say the line, the audio file, and the volume of the audio. The conversation node is placed based on the attributes specified in the placement phase. Each conversation location requires a list of initially placed avatars (0-to-many), the formation, and a proximity [8], which can be used to place avatars uniformly around the F-formation shape.
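As an illustration of the configuration surface just described, the following hedged C# sketch models a conversation node and its distance-based dialogue trigger. The field names mirror the properties in the text, but the component structure is ours, not the plugin's actual implementation.

// Sketch: a conversation node's configuration and proxemic trigger.
// Illustrative MonoBehaviour, not Story CreatAR's actual component.
using System.Collections.Generic;
using UnityEngine;

public enum ConversationType { Intimate, Personal, Social }
public enum FormationShape { Circle, SemiCircle, Line }

public class ConversationNode : MonoBehaviour
{
    public ConversationType Type = ConversationType.Social;
    public FormationShape Formation = FormationShape.Circle;
    public List<GameObject> InitialAvatars = new List<GameObject>();
    public List<GameObject> AllAvatars = new List<GameObject>();
    public AudioClip OutOfRangeAudio;    // e.g., crowd noise
    public float TriggerDistance = 2.5f; // proxemic relative-distance rule
    public bool LoopDialogue = true;
    public float LoopWaitSeconds = 4f;
    public Transform Player;

    void Update()
    {
        // Switch between ambient audio and dialogue as the player
        // crosses the proxemic trigger distance.
        float d = Vector3.Distance(Player.position, transform.position);
        if (d <= TriggerDistance) PlayConversationDialogue();
        else PlayOutOfRangeAudio();
    }

    void PlayConversationDialogue() { /* advance voice lines, honour looping */ }
    void PlayOutOfRangeAudio() { /* play the ambient clip */ }
}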
Story CreatAR has options for specifying two types of global events: traversal events and timer events. In a traversal event the author specifies an avatar to move (from the avatars created in their story), where it must move to (if the avatar is part of a conversation node, the conversation node will be an option), and the speed of movement (walk, fast walk, run). The avatar moves to the destination using A* pathfinding. In a timer event the author specifies the amount of time to wait. Both aid the progression of the story.

5.3 Spatial Analysis

5.3.1 Axial/Isovist Analysis

depthmapX is used to perform visibility analysis (i.e., based on "line of sight" and isovists) of a floor plan. From the CLI we can load a floor plan and calculate its isovist properties and visibility relations through depthmapX's visibility graph analysis. We then retrieve the result in CSV format, consisting of space syntax characteristics including visual integration, isovist area (openness), and isovist perimeter (visual complexity). These values are extracted from the CSV and converted to the 0–100 ranges used in the interface. The CSV output is saved, so that authors can swap floor plans without needing to recalculate the space syntax values every time.

5.3.2 Convex/Segment Analysis

Our current support for convex analysis involves extracting a connectivity graph, which assigns IDs to rooms and indicates how rooms are connected (i.e., through doorways). This data structure forms the basis of a range of convex analyses in space syntax. To obtain the connectivity graph we use the open source AFPlan tool, which identifies rooms and doorways in the vector floor plan. At present, room ID and geometry data are used to ensure that story elements are placed in the same room.

5.3.3 Placement Generation

When spatial analysis of a particular floor plan is complete, and an author has constructed a set of placement rules they would like to test, story elements (including avatars, props, and conversations) can be placed. To do this we have adapted the open source Unity plugin created by Reilly et al. [41]. Story CreatAR generates the JSON ruleset required by the plugin, and a greedy algorithm is followed to satisfy the constraints; lower-priority rules are relaxed as necessary to find a solution given a target floor plan.
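The plugin's internals are not detailed here, so the following is only one plausible reading of "greedy with relaxation": try the attributes in priority order against sampled floor positions, and if no free position satisfies all of them, drop the lowest-priority attribute and retry. The sketch reuses the SpatialAttribute type from the earlier sketch; all names are illustrative, not the actual plugin's algorithm.

// Sketch: greedy placement with priority relaxation. Attributes are
// ordered highest priority first; rules are relaxed from the tail.
using System.Collections.Generic;
using System.Linq;

public static class GreedyPlacer
{
    // candidates: per-position percentile scores (openness, visual
    // complexity, visual integration). Returns a candidate index, or -1.
    public static int PlaceIndex(
        List<SpatialAttribute> attributes,
        List<(float open, float cplx, float integ)> candidates,
        HashSet<int> occupied)
    {
        for (int keep = attributes.Count; keep >= 0; keep--)
        {
            // Keep only the `keep` highest-priority attributes.
            var active = attributes.Take(keep).ToList();
            for (int i = 0; i < candidates.Count; i++)
            {
                if (occupied.Contains(i)) continue;
                var c = candidates[i];
                if (active.All(a => a.Matches(c.open, c.cplx, c.integ)))
                {
                    occupied.Add(i); // reserve the position
                    return i;
                }
            }
        }
        return -1; // no free position even with every rule relaxed
    }
}

Under this reading, the Bob example from section 5.2.2 falls out naturally: with Open Area at priority (1), the Hidden rule is the first to be relaxed when both cannot be satisfied.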
5.3.4 Proxemics and F-formations

We integrated the ProxemicUI framework [8] to use proxemic relationships for conversation nodes. Story elements in Story CreatAR are assigned trackers which monitor their position and orientation in space, so they can be used as entities in ProxemicUI rules. For example, a rule might detect when the player tracker is within the range of 2 to 3 meters of any entity OR facing an entity. Proxemic rules are currently used in Story CreatAR for avatar interaction, in particular between the player and conversation nodes as the player first approaches a group and then participates in a group conversation.

5.3.5 Avatar Formation Rules

All stories included avatar-to-avatar conversation as a key narrative element. These conversations need to support resizing as avatars enter and leave conversations while the story progresses. Conversations can happen in different F-formation arrangements (e.g., a circle). For example, in Story CreatAR, given a list of avatars, a formation, and a proximity, an appropriate radius for the circle can be calculated. Using the radius and the initial list of avatars, the circular F-formation can be generated. As avatars leave the conversation the remaining structure does not change, but when new avatars wish to enter a conversation the formation is restructured. Optionally, a prop can be included in the center (o-space) or boundary (p-space) of the conversation, which attracts the avatars' attention. If the player gets close enough, they join the conversation and the attention of the avatars is directed to the player.
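To make the radius calculation concrete, here is a hedged C# sketch: avatars are spaced evenly along a circle whose circumference gives neighbours a chosen proxemic spacing, and each avatar faces the circle's centre (the o-space). The geometry is standard; the function and parameter names are ours, not the toolkit's.

// Sketch: generating a circular F-formation. Neighbours end up roughly
// `spacing` metres apart along the arc, all facing the o-space centre.
using System.Collections.Generic;
using UnityEngine;

public static class CircleFormation
{
    public static void Arrange(List<Transform> avatars, Vector3 center,
                               float spacing /* e.g., 1.2f for "personal" */)
    {
        int n = avatars.Count;
        // circumference = n * spacing  =>  radius = n * spacing / (2*pi),
        // with a floor so very small groups do not collapse onto the centre.
        float radius = Mathf.Max(0.5f, n * spacing / (2f * Mathf.PI));

        for (int i = 0; i < n; i++)
        {
            float angle = 2f * Mathf.PI * i / n;
            Vector3 pos = center + new Vector3(
                radius * Mathf.Cos(angle), 0f, radius * Mathf.Sin(angle));
            avatars[i].position = pos;
            // Face the o-space at the centre of the formation.
            avatars[i].rotation =
                Quaternion.LookRotation(center - pos, Vector3.up);
        }
    }
}

Consistent with the behaviour described above, a join would call Arrange again with the enlarged avatar list (restructuring the formation), while a departure would simply leave the remaining avatars where they stand.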