AUGMENTED REALITY

A TECHNICAL SEMINAR REPORT ON AUGMENTED REALITY

Submitted in partial fulfilment of the requirement for the award of the degree of
BACHELOR OF TECHNOLOGY
in
COMPUTER SCIENCE AND ENGINEERING

Submitted by
KANDURI SUNDARA KRISHNA YETHIRAJAN
17BD1A050R

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING
KESHAV MEMORIAL INSTITUTE OF TECHNOLOGY
(Approved by AICTE, Affiliated to JNTUH)
Narayanaguda, Hyderabad, Telangana-29
2020-21

CERTIFICATE

This is to certify that the seminar work entitled "Augmented Reality" is a bonafide work carried out in the seventh semester by "NAME ROLL NO" in partial fulfilment of the requirements for the award of Bachelor of Technology in Computer Science & Engineering from JNTU Hyderabad during the academic year 2020-21, and that no part of this work has been submitted earlier for the award of any degree.

SEMINAR CO-ORDINATOR          HEAD OF THE DEPARTMENT

2. LIST OF FIGURES

1. Optical see-through display
2. Video see-through display
3. Projection-based AR can build you a castle in the air
4. Outlining AR
5. Virtual fetus
6. Mockup of breast tumor biopsy
7. Virtual planes show the movement of a robotic arm

3. LIST OF TABLES

1. Comparison of the requirements of Augmented Reality and Virtual Reality

4. INTRODUCTION

Augmented reality (AR) refers to computer displays that add virtual information to a user's sensory perception. Most AR research focuses on see-through devices, usually worn on the head, that overlay graphics and text on the user's view of his or her surroundings. In general, AR superimposes graphics over a real-world environment in real time.

Getting the right information at the right time and the right place is key in all these applications. Personal digital assistants such as the Palm and the Pocket PC can provide timely information using wireless networking and Global Positioning System (GPS) receivers that constantly track the handheld devices. But what makes augmented reality different is how the information is presented: not on a separate display but integrated with the user's perceptions. This kind of interface minimizes the extra mental effort that a user has to expend when switching his or her attention back and forth between real-world tasks and a computer screen. In augmented reality, the user's view of the world and the computer interface literally become one.

[Figure: the reality-virtuality continuum, running from Real Environment through Augmented Reality and Augmented Virtuality to Virtual Environment]
5. HISTORY

Although augmented reality may seem like the stuff of science fiction, researchers have been building prototype systems for more than three decades. The first was developed in the 1960s by computer graphics pioneer Ivan Sutherland and his students at Harvard University. In the 1970s and 1980s a small number of researchers studied augmented reality at institutions such as the U.S. Air Force's Armstrong Laboratory, the NASA Ames Research Center and the University of North Carolina at Chapel Hill. It wasn't until the early 1990s that the term "augmented reality" was coined by scientists at Boeing who were developing an experimental AR system to help workers assemble wiring harnesses. In 1996 developers at Columbia University built "The Touring Machine", and in 2001 MIT came up with a very compact AR system known as MIThril. Presently, research is being done on BARS (Battlefield Augmented Reality System) by engineers at the Naval Research Laboratory, Washington, D.C.

6. WORKING

An AR system tracks the position and orientation of the user's head so that the overlaid material can be aligned with the user's view of the world. Through this process, known as registration, graphics software can place a three-dimensional image of a teacup, for example, on top of a real saucer and keep the virtual cup fixed in that position as the user moves about the room. AR systems employ some of the same hardware technologies used in virtual reality research, but there is a crucial difference: whereas virtual reality brashly aims to replace the real world, augmented reality respectfully supplements it.

Augmented reality is still in an early stage of research and development at various universities and high-tech companies. Eventually, possibly by the end of this decade, we will see the first mass-marketed augmented reality system, which one researcher calls "the Walkman of the 21st century". What augmented reality attempts to do is not only superimpose graphics over a real environment in real time, but also change those graphics to accommodate the user's head and eye movements, so that the graphics always fit the perspective.

Three components are needed to make an augmented-reality system work:

1. Head-mounted display
2. Tracking system
3. Mobile computing power
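To make the registration step described above concrete, the sketch below projects a world-anchored point (the saucer) into display coordinates from the tracked head pose; repeating this every frame is what keeps the virtual cup pinned to the real saucer. This is only an illustration: the function name, the pinhole intrinsics and the simple pose representation are assumptions, not the interface of any particular AR toolkit.

```python
import numpy as np

def project_anchor(anchor_world, head_pos, head_rot, fx=800.0, fy=800.0, cx=640.0, cy=360.0):
    """Project a world-anchored 3-D point into the display given the tracked head pose.

    anchor_world : (3,) point fixed in the room, e.g. the saucer the virtual cup sits on
    head_pos     : (3,) tracked head/camera position in world coordinates
    head_rot     : (3, 3) rotation matrix, world -> camera, from the head tracker
    fx, fy, cx, cy : assumed pinhole intrinsics of the display/camera
    """
    # Express the world point in the camera (head) frame.
    p_cam = head_rot @ (np.asarray(anchor_world) - np.asarray(head_pos))
    if p_cam[2] <= 0:            # behind the viewer: nothing to draw
        return None
    # Pinhole projection to pixel coordinates.
    u = fx * p_cam[0] / p_cam[2] + cx
    v = fy * p_cam[1] / p_cam[2] + cy
    return u, v

# Each frame: re-read the tracker and re-project, so the graphic stays aligned.
saucer = (0.0, 0.0, 2.0)                    # 2 m in front of the starting position
pose_R = np.eye(3)                          # head looking straight ahead
print(project_anchor(saucer, (0.0, 0.0, 0.0), pose_R))   # -> (640.0, 360.0), the image centre
```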
1.a Head-Mounted Display

Just as monitors allow us to see text and graphics generated by computers, head-mounted displays (HMDs) enable us to view graphics and text created by augmented-reality systems. There are two basic types of HMD:

i. Video see-through
ii. Optical see-through

Fig 1: Optical see-through display

A simple approach to an optical see-through display employs a mirror beam splitter: a half-silvered mirror that both reflects and transmits light. If properly oriented in front of the user's eye, the beam splitter can reflect the image of a computer display into the user's line of sight yet still allow light from the surrounding world to pass through. Such beam splitters, which are called combiners, have long been used in head-up displays for fighter-jet pilots (and, ...)

[...] graphics, with everything focusing at the same apparent distance. At present, a video camera and display are no match for the human eye.

An optical approach has the following advantages over a video approach:

1. Simplicity: Optical blending is simpler and cheaper than video blending. Optical approaches have only one "stream" of video to worry about: the graphic images. The real world is seen directly through the combiners, and that time delay is generally a few nanoseconds. Video blending, on the other hand, must deal with separate video streams for the real and virtual images. The two streams of real and virtual images must be properly synchronized or temporal distortion results. Also, optical see-through HMDs with narrow-field-of-view combiners offer views of the real world that have little distortion. Video cameras almost always have some amount of distortion that must be compensated for, along with any distortion from the optics in front of the display devices. Since video requires cameras and combiners that optical approaches do not need, video will probably be more expensive and complicated to build than optical-based systems.

2. Resolution: Video blending limits the resolution of what the user sees, both real and virtual, to the resolution of the display devices. With current displays, this resolution is far less than the resolving power of the fovea. Optical see-through also shows the graphic images at the resolution of the display devices, but the user's view of the real world is not degraded. Thus, video reduces the resolution of the real world, while optical see-through does not.

3. Safety: Video see-through HMDs are essentially modified closed-view HMDs. If the power is cut off, the user is effectively blind. This is a safety concern in some applications. In contrast, when power is removed from an optical see-through HMD, the user still has a direct view of the real world. The HMD then becomes a pair of heavy sunglasses, but the user can still see.

4. No eye offset: With video see-through, the user's view of the real world is provided by the video cameras. In essence, this puts his "eyes" where the video cameras are. In most configurations the cameras are not located exactly where the user's eyes are, creating an offset between the cameras and the real eyes. The distance separating the cameras may also not be exactly the same as the user's interpupillary distance (IPD). This difference between camera locations and eye locations introduces displacements from what the user sees compared to what he expects to see. For example, if the cameras are above the user's eyes, he will see the world from a vantage point slightly taller than he is used to.

Video blending offers the following advantages over optical blending:

1. Flexibility in composition strategies: A basic problem with optical see-through is that the virtual objects do not completely obscure the real-world objects, because the optical combiners allow light from both virtual and real sources. Building an optical see-through HMD that can selectively shut out the light from the real world is difficult. Any filter that would selectively block out light must be placed in the optical path at a point where the image is in focus, which obviously cannot be the user's eye. Therefore, the optical system must have two places where the image is in focus: at the user's eye and at the point of the hypothetical filter. This makes the optical design much more difficult and complex. No existing optical see-through HMD blocks incoming light in this fashion. Thus, the virtual objects appear ghost-like and semi-transparent. This damages the illusion of reality, because occlusion is one of the strongest depth cues. In contrast, video see-through is far more flexible about how it merges the real and virtual images. Since both the real and virtual are available in digital form, video see-through compositors can, on a pixel-by-pixel basis, take the real, or the virtual, or some blend between the two to simulate transparency (a minimal compositing sketch follows this comparison).

2. Wide field of view: Distortions in optical systems are a function of the radial distance away from the optical axis. The further one looks away from the center of the view, the larger the distortions get. A digitized image taken through a distorted optical system can be undistorted by applying image-processing techniques to unwarp the image, provided that the optical distortion is well characterized. This requires a significant amount of computation, but this constraint will be less important in the future as computers become faster. It is harder to build wide-field-of-view displays with optical see-through techniques. Any distortions of the user's view of the real world must be corrected optically, rather than digitally, because the system has no digitized image of the real world to manipulate. Complex optics are expensive and add weight to the HMD. Wide-field-of-view systems are an exception to the general trend of optical approaches being simpler and cheaper than video approaches.

3. Real and virtual view delays can be matched: Video offers an approach for reducing or avoiding problems caused by temporal mismatches between the real and virtual images. Optical see-through HMDs offer an almost instantaneous view of the real world but a delayed view of the virtual. This temporal mismatch can cause problems. With video approaches, it is possible to delay the video of the real world to match the delay from the virtual image stream.

4. Additional registration strategies: In optical see-through, the only information the system has about the user's head location comes from the head tracker. Video blending provides another source of information: the digitized image of the real scene. This digitized image means that video approaches can employ additional registration strategies unavailable to optical approaches.

5. Easier to match the brightness of the real and virtual objects.

Both optical and video technologies have their roles, and the choice of technology depends upon the application requirements. Many of the assembly and repair prototypes use optical approaches, possibly because of the cost and safety issues. If successful, the equipment would have to be replicated in large [...]
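The "pixel-by-pixel" compositing flexibility noted in advantage 1 above can be sketched in a few lines: because both the camera frame and the rendered graphics exist as digital images, any per-pixel blend is possible. This is a minimal NumPy illustration under assumed image formats, not the pipeline of any actual video see-through HMD.

```python
import numpy as np

def composite(real_frame, virtual_frame, alpha_mask):
    """Per-pixel blend of the camera image and the rendered graphics.

    real_frame    : HxWx3 uint8 camera image of the real world
    virtual_frame : HxWx3 uint8 rendered virtual objects
    alpha_mask    : HxW float in [0, 1]; 1 = fully virtual (opaque object),
                    0 = fully real, in between = semi-transparent overlay
    """
    a = alpha_mask[..., None]                      # broadcast over colour channels
    out = a * virtual_frame.astype(np.float32) + (1.0 - a) * real_frame.astype(np.float32)
    return out.astype(np.uint8)

# Toy example: an opaque virtual square occluding the centre of the camera frame.
h, w = 480, 640
real = np.full((h, w, 3), 90, np.uint8)            # stand-in for a camera frame
virtual = np.zeros((h, w, 3), np.uint8)
virtual[200:280, 280:360] = (0, 200, 255)          # rendered object
mask = np.zeros((h, w), np.float32)
mask[200:280, 280:360] = 1.0                       # opaque where the object was rendered
merged = composite(real, virtual, mask)
```

An optical see-through combiner cannot do this, which is exactly why its virtual objects look ghost-like; with video blending, occlusion is just a choice of alpha values.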
A user can get better results with a technique known as differential GPS. In this method, the mobile GPS receiver also monitors signals from another GPS receiver and a radio transmitter at a fixed location on the earth. This transmitter broadcasts a correction based on the difference between the stationary GPS antenna's known and computed positions. By using these signals to correct the satellite signals, differential GPS can reduce the margin of error to less than one meter.

A system can achieve centimeter-level accuracy by employing real-time kinematic (RTK) GPS, a more sophisticated form of differential GPS that also compares the phases of the signals at the fixed and mobile receivers. Trimble Navigation reports that it has increased the precision of its global positioning system by replacing local reference stations with what it terms a Virtual Reference Station (VRS). The VRS enables users to obtain centimeter-level positioning without local reference stations; it can achieve long-range, real-time kinematic precision over greater distances via wireless communications wherever they are located.
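The differential correction described above reduces to a simple idea: the fixed station knows its surveyed position, measures how far off its GPS-computed position is, and broadcasts that error so the mobile receiver can subtract it. The position-domain sketch below is a simplification (production receivers correct per-satellite pseudoranges, and RTK additionally uses carrier phase); the function and coordinate frame are illustrative assumptions.

```python
def differential_fix(rover_fix, base_computed, base_surveyed):
    """Apply a position-domain differential GPS correction.

    rover_fix     : (x_m, y_m, z_m) position computed by the mobile receiver,
                    expressed here in a local metric frame for simplicity
    base_computed : position the fixed reference station computed from the satellites
    base_surveyed : the reference station's precisely known (surveyed) position
    """
    # Error the base station observed in its own fix; assumed to affect the
    # nearby rover almost identically (common atmospheric and orbit errors).
    error = tuple(c - s for c, s in zip(base_computed, base_surveyed))
    return tuple(r - e for r, e in zip(rover_fix, error))

# The base station knows it sits at (0, 0, 0) but the satellites place it 2.1 m off:
corrected = differential_fix(rover_fix=(105.3, 40.7, 12.0),
                             base_computed=(2.1, -0.4, 1.2),
                             base_surveyed=(0.0, 0.0, 0.0))
print(corrected)   # -> (103.2, 41.1, 10.8), closer to the rover's true position
```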
The real-time kinematic technique is a way to use GPS measurements to generate positions accurate to within one to two centimeters (0.39 to 0.79 inches). RTK is often used as the key component in navigation systems or automatic machine guidance.

Unfortunately, GPS is not the ultimate answer to position tracking. The satellite signals are relatively weak and easily blocked by buildings or even foliage. This rules out useful tracking indoors or in places like midtown Manhattan, where rows of tall buildings block most of the sky. GPS tracking works well only in wide-open spaces and among relatively low buildings.

1.b Mobile Computing Power

For a wearable augmented reality system, there is still not enough computing power to create stereo 3-D graphics, so for now researchers are making do with whatever they can get out of laptops and personal computers. Laptops are only now starting to be equipped with graphics processing units (GPUs). Toshiba recently added an NVIDIA GPU to its notebooks that can process more than 17 million triangles per second and 286 million pixels per second, which can enable graphics-intensive programs such as 3-D games. But notebooks still lag far behind: NVIDIA has developed a custom 300-MHz 3-D graphics processor for Microsoft's Xbox game console that can produce 150 million polygons per second, and polygons are more complicated than triangles. So you can see how far mobile graphics chips have to go before they can create smooth graphics like the ones you see on your home video-game system.

7. TYPES OF AUGMENTED REALITY

4.1 Projection-based AR

Just like anything else that is beyond our reach, projection-based AR feels more attractive (at least as of now) compared to an AR app you can install on your phone. As its name suggests, projection-based AR works by projecting imagery onto objects, and what makes it interesting is the wide array of possibilities. One of the simplest is the projection of light onto a surface. Speaking of lights, surfaces and AR, did you ever think that those lines on your fingers (which divide each finger into three parts) could create 12 buttons? Have a look at the image and you will quickly grasp the idea. The picture depicts one of the simplest uses of projection-based AR, where light is projected onto a surface and interaction happens by touching the projected surface with the hand. The system detects where the user has touched the surface by differentiating between an expected (or known) projection image and the projection altered by the interference of the user's hand.
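The touch detection just described, comparing the known projected image with what a camera actually observes, can be sketched as a simple image difference; the threshold, image sizes and the assumption that the camera view is already warped into the projector's frame are all illustrative.

```python
import numpy as np

def touched_region(expected_projection, observed_frame, threshold=40):
    """Locate where the user's hand interrupts a known projected image.

    expected_projection : HxW uint8 image the projector is currently showing
    observed_frame      : HxW uint8 camera view of the projection surface,
                          assumed already warped into the projector's frame
    threshold           : per-pixel difference (0-255) treated as 'blocked'
    """
    diff = np.abs(observed_frame.astype(np.int16) - expected_projection.astype(np.int16))
    blocked = diff > threshold             # boolean mask of occluded pixels
    if not blocked.any():
        return None                        # nothing is touching the surface
    ys, xs = np.nonzero(blocked)
    return int(xs.mean()), int(ys.mean())  # rough centre of the hand/fingertip

# A projected "button panel" and a camera frame where a hand covers one corner:
expected = np.full((120, 160), 200, np.uint8)
observed = expected.copy()
observed[80:120, 120:160] = 60             # darker region where the hand blocks light
print(touched_region(expected, observed))  # -> approx (139, 99), inside that corner
```

Mapping the returned centre onto one of the twelve finger-segment "buttons" would then be a simple lookup into the projected layout.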
8. APPLICATIONS

Only recently have the capabilities of real-time video image processing, computer graphics systems and new display technologies converged to make possible the display of a virtual graphical image correctly registered with a view of the 3-D environment surrounding the user. Researchers working with AR systems have proposed them as solutions in many domains; the areas discussed range from entertainment to military training. Many of the domains, such as medicine, have also been proposed for traditional virtual reality systems. This section highlights some of the proposed applications of augmented reality.

5.1 Medical

Because imaging technology is so pervasive throughout the medical field, it is not surprising that this domain is viewed as one of the more important ones for augmented reality systems. Most of the medical applications deal with image-guided surgery. Pre-operative imaging studies of the patient, such as CT or MRI scans, provide the surgeon with the necessary view of the internal anatomy. From these images the surgery is planned. Visualization of the path through the anatomy to the affected area (where, for example, a tumor must be removed) is done by first creating a 3-D model from the multiple views and slices in the pre-operative study. This is most often done mentally, though some systems will create a 3-D volume visualization from the image study. AR can be applied so that the surgical team can see the CT or MRI data correctly registered on the patient in the operating theater while the procedure is progressing. Being able to accurately register the images at this point will enhance the performance of the surgical team.

Another application for AR in the medical domain is in ultrasound imaging. Using an optical see-through display, the ultrasound technician can view a volumetric rendered image of the fetus overlaid on the abdomen of the pregnant woman. The image appears as if it were inside the abdomen and is correctly rendered as the user moves.

Fig 5: Virtual fetus inside the womb of a pregnant patient.

Fig 6: Mockup of breast tumor biopsy. 3-D graphics guide needle insertion.

5.2 Entertainment

A simple form of augmented reality has been in use in the entertainment and news business for quite some time. Whenever you watch the evening weather report, the weather reporter is shown standing in front of changing weather maps. In the studio, the reporter is actually standing in front of a blue or green screen. This real image is augmented with the computer-generated maps using a technique called chroma-keying. It is also possible to create a virtual studio environment so that the actors appear to be positioned in a studio with computer-generated decoration.

Movie special effects make use of digital computing to create illusions. Strictly speaking, with current technology this may not be considered augmented reality, because it is not generated in real time. Most special effects are created off-line, frame by frame, with a substantial amount of user interaction and computer-graphics rendering. But some work is progressing on computer analysis of the live-action images to determine the camera parameters and use them to drive the generation of the virtual graphics objects to be merged.
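Chroma-keying as used in the weather-studio example above can be sketched as replacing every sufficiently "green" pixel of the studio frame with the corresponding pixel of the computer-generated map. The colour test below is deliberately crude and the margin is an assumed value; broadcast keyers are far more sophisticated.

```python
import numpy as np

def chroma_key(studio_frame, background, green_margin=60):
    """Replace green-screen pixels of the studio shot with a generated background.

    studio_frame : HxWx3 uint8 camera image of the reporter in front of a green screen
    background   : HxWx3 uint8 computer-generated image (e.g. the weather map)
    green_margin : how much greener than red/blue a pixel must be to count as screen
    """
    r, g, b = (studio_frame[..., i].astype(np.int16) for i in range(3))
    is_screen = (g - np.maximum(r, b)) > green_margin      # crude "green enough" test
    out = studio_frame.copy()
    out[is_screen] = background[is_screen]
    return out

# Toy frame: green screen everywhere except a grey "reporter" block in the middle.
frame = np.zeros((90, 160, 3), np.uint8)
frame[..., 1] = 230                                # bright green backdrop
frame[20:80, 60:100] = (120, 110, 100)             # the reporter
weather_map = np.full((90, 160, 3), (10, 40, 180), np.uint8)
keyed = chroma_key(frame, weather_map)             # reporter kept, green replaced by the map
```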
5.6 Consumer design

Virtual reality systems are already used for consumer design. Using perhaps more of a graphics system than true virtual reality, when you go to a typical home store wanting to add a new deck to your house, they will show you a graphical picture of what the deck will look like. It is conceivable that a future system would allow you to bring in a videotape of your house shot from various viewpoints in your backyard, and in real time it would augment that view to show the new deck in its finished form attached to your house. Or you could bring in a tape of your current kitchen, and the augmented reality processor would replace your current cabinetry with virtual images of the new kitchen that you are designing.

9. ADVANTAGES AND LIMITATIONS

Advantages

- Augmented reality is set to revolutionize the mobile user experience just as gesture and touch (multi-modal interaction) did in mobile phones. This will redefine the mobile user experience for the next generation, making mobile search invisible and reducing search effort for users.
- Augmented reality, like multi-modal interaction (gestural interfaces), has a long history of usability research, analysis and experimentation, and therefore rests on a solid foundation as an interface technique.
- Augmented reality improves mobile usability by acting as the interface itself, requiring little interaction. Imagine turning on your phone or pressing a button and having the space, people and objects around you "sensed" by your mobile device, giving you location-based or context-sensitive information on the fly.

Limitations

- Current performance levels (speed) on today's [2009] iPhone or similar touch devices like the Google G1 will take a few generations to make augmented reality feasible as a general interface technique accessible to the general public.
- Content may obscure or narrow a user's interests or tastes. For example, knowing where McDonald's or Starbucks is in Paris or Rome might not interest users as much as the "off the beaten track" information they might seek out in travel experiences.
- Privacy control will become a bigger issue than with today's information saturation levels. Walking up to a stranger or a group of people and having their status, thoughts (tweets) or other information (which usually comes with an introduction) revealed might cause unwarranted breaches of privacy.

10. FUTURE ENHANCEMENT

This section identifies areas and approaches that require further research to produce improved AR systems.

Hybrid approaches

Future tracking systems may be hybrids, because combining approaches can cover weaknesses. The same may be true for other problems in AR. For example, current registration strategies generally focus on a single technique; future systems may be more robust if several techniques are combined. An example is combining vision-based techniques with prediction. If the fiducials are not visible, the system switches to open-loop prediction to reduce the registration errors, rather than breaking down completely. The predicted viewpoints in turn produce a more accurate initial location estimate for the vision-based techniques.
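A minimal sketch of that fallback idea, under an assumed constant-velocity motion model and a simplified position-only pose: while the vision-based (fiducial) pose is available it is used directly and refreshes the model; when the fiducials are lost, the tracker predicts open-loop instead of failing. The class and parameter names are illustrative only.

```python
import numpy as np

class HybridTracker:
    """Vision-based pose when fiducials are visible, open-loop prediction otherwise."""

    def __init__(self):
        self.pose = np.zeros(3)       # simplified pose: just a 3-D position
        self.velocity = np.zeros(3)   # constant-velocity motion model

    def update(self, vision_pose, dt):
        """vision_pose is a (3,) position when fiducials were detected, else None."""
        if vision_pose is not None:
            vision_pose = np.asarray(vision_pose, dtype=float)
            if dt > 0:
                self.velocity = (vision_pose - self.pose) / dt   # refresh the model
            self.pose = vision_pose                              # closed-loop correction
        else:
            self.pose = self.pose + self.velocity * dt           # open-loop prediction
        return self.pose

tracker = HybridTracker()
tracker.update((0.0, 0.0, 1.0), dt=0.033)     # fiducials visible
tracker.update((0.0, 0.0, 1.1), dt=0.033)     # still visible, velocity estimated
print(tracker.update(None, dt=0.033))         # fiducials lost: predicted pose ~ (0, 0, 1.2)
```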
Applications in medical visualization will take longer to arrive. Prototype visualization aids have been used on an experimental basis, but the stringent registration requirements and the ramifications of mistakes will postpone common usage for many years. AR will probably be used for medical training before it is commonly used in surgery.

The next generation of combat aircraft will have helmet-mounted sights with graphics registered to targets in the environment. These displays, combined with short-range steerable missiles that can shoot at targets off-boresight, give a tremendous combat advantage to pilots in dogfights. Instead of having to be directly behind his target in order to shoot at it, a pilot can now shoot at anything within a 60-90 degree cone of his aircraft's forward centerline. Russia and Israel currently have systems with this capability, and the U.S. is expected to field the AIM-9X missile with its associated helmet-mounted sight in 2002.

Augmented reality is a relatively new field, where most of the research effort has occurred in the past four years. Because of the numerous challenges and unexplored avenues in this area, AR will remain a vibrant area of research for at least the next several years. After the basic problems with AR are solved, the ultimate goal will be to generate virtual objects that are so realistic that they are virtually indistinguishable from the real environment. Photorealism has been demonstrated in feature films, but accomplishing this in an interactive application will be much harder. Lighting conditions, surface reflections and other properties must be measured automatically, in real time.