Scientific Program

Conference Series Ltd invites participants from across the globe to attend the 2nd International Conference and Expo on Computer Graphics & Animation in San Antonio, USA.

Day 1:

  • Animation

Session Introduction

Tom Sito

The University of Southern California, USA

Title: Computer Animation at the Half-Century: How Did We Get Here?
Speaker
Biography:

Tom Sito has been a professional animator since 1975. One of the key players in Disney’s animation revival in the 1990s, he animated on such classic films as The Little Mermaid (1989), Beauty and the Beast (1991), and The Lion King (1994). He is Chair of the John C. Hench Division of Animation and Digital Arts at the School of Cinematic Arts at the University of Southern California, and President Emeritus of the Animation Guild, Local 839, Hollywood. He is the author of several books, including Drawing the Line: The Untold Story of the Animation Unions from Bosko to Bart Simpson (University Press of Kentucky, 2006) and Moving Innovation: A History of Computer Animation (MIT Press, 2013).

Abstract:

Fifty years ago a graduate student at MIT completed his thesis project by creating the first ever animation program on a declassified Cold War computer used to track Soviet nukes. In the intervening years computer graphics (or CG) has forever changed the way we experience media. Without CG the Titanic would not sink, the armies of Middle Earth could not march, and we would never know Shrek, Lara Croft, Buzz Lightyear or the Na'vi. It has made movie film itself an anachronism. Yet few today understand its origins. Ask seven professionals what the first computer graphics in a major motion picture was, and you will probably get seven different answers. There is more to the history of CG than one day George Lucas rubbed a lamp and Pixar popped out. Tom Sito, author of the first complete history of CG, describes how an unlikely cast of characters (math nerds, experimental artists, beatniks, test pilots, hippies, video gamers and entrepreneurs) shared a common dream: to create art with a computer, heretofore considered only a machine for calculations. Together they created something no one asked for, and no one knew they wanted, and they used it to change all the world’s media.

Speaker
Biography:

Sean McComber is an Assistant Professor of Animation in Arts and Technology (ATEC) at the University of Texas at Dallas. He graduated from Savannah College of Art and Design with a B.F.A. in Computer Art with an emphasis in Animation, and received his M.F.A. in ATEC from UTD. After graduating, Sean was accepted into the internship program at Rhythm & Hues Studios, a visual effects production company for film. Sean rose from intern to Lead Animator and eventually traveled to Rhythm & Hues’ Mumbai, India, facility as Supervising Animator. Sean currently teaches classes in Character Animation.

Eric Farrar is an Assistant Professor of 3D Computer Animation in Arts and Technology (ATEC). He graduated from The Ohio State University, where he completed an MFA in Computer Animation and Visualization working through the Advanced Computing Center for Art and Design (ACCAD). Eric then went to work for the Los Angeles-based visual effects studio Rhythm & Hues, where he worked as a character rigger creating bone and muscle systems for digital characters in films such as Night at the Museum and The Chronicles of Narnia: The Lion, the Witch and the Wardrobe. Eric currently teaches classes in 3D animation, including courses specifically focused on the more technical side of character rigging.

Todd Fechter is an Associate Professor of Animation and current Interim Director of the School of Arts, Technology and Emerging Communication. He graduated with an MFA in Computer Animation and Visualization from The Ohio State University in 2002. Fechter has worked in and around the animation industry for the past thirteen years as a modeler, rigger, and modeling supervisor for studios including DNA Productions and Reel FX. Fechter currently teaches courses in modeling and pre-production.

Abstract:

Preparing students for careers in the animation industry can be a challenge. Over the past three years we have developed an Animation Production Studio course in which we strive to mimic a studio production environment. In this course students have the opportunity to drive the entire production pipeline, including story development, layout, modeling, texturing, rigging, animation, lighting, rendering/compositing, and sound design, as well as project planning and management. Students work in a collaborative environment and develop skills with specific production tasks, in addition to gaining critical experience working as part of a large, multi-disciplinary team with definite production goals and deadlines. The problem-solving and time-management skills developed in this course help prepare our students not only for the film and game industries, but also for the myriad new and emerging areas of animation and visualization. This lecture will discuss the structure of the course, what has and has not worked over the past three years, and how the evolution of this course has helped to prepare students for work after college, drive the growth and direction of the ATEC animation program, and create several award-winning short films.

Speaker
Biography:

Abdennour El Rhalibi is Professor of Entertainment Computing and Head of Strategic Projects at Liverpool John Moores University. He is Head of the Computer Games Research Lab at the Protect Research Centre. He has over 22 years' experience in research and teaching in computer science. Abdennour has worked as lead researcher on three EU projects in France and the UK. His current research involves game technologies and applied artificial intelligence. For six years Abdennour has led several projects in entertainment computing funded by the BBC and UK-based games companies, involving cross-platform development tools for games, 3D web-based game middleware development, state synchronisation in multiplayer online games, peer-to-peer MMOGs and 3D character animation. Abdennour has published over 150 publications in these areas. He serves on many journal editorial boards, including ACM Computers in Entertainment and the International Journal of Computer Games Technology. He has served as chair and IPC member at over 100 conferences on computer entertainment, AI and VR. Abdennour is a member of many international research committees in AI and entertainment computing, including IEEE MMTC IG: 3D Rendering, Processing and Communications (3DRPCIG), the IEEE Task Force on Computational Intelligence in Video Games and IFIP WG 14.4 Games and Entertainment Computing.

Abstract:

In this talk, Prof. Abdennour El Rhalibi will present an overview of his research in game technologies at LJMU. He will present some recent projects developed with BBC R&D on game middleware development and facial animation. In particular he will introduce a novel framework for coarticulation and speech synchronization for MPEG-4 based facial animation. The system, known as Charisma, enables the creation, editing and playback of high-resolution 3D models and MPEG-4 animation streams, and is compatible with well-known related systems such as Greta and Xface. It supports text-to-speech for dynamic speech synchronization. The framework also enables real-time model simplification using quadric-based surfaces. The coarticulation approach provides realistic and high-performance lip-sync animation, based on Cohen-Massaro's model of coarticulation adapted to the MPEG-4 facial animation (FA) specification. He will also discuss experiments which show that the coarticulation technique gives overall good results when compared to related state-of-the-art techniques.
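
The dominance-function blending at the heart of Cohen-Massaro style coarticulation is easy to sketch. The following Python fragment is a minimal illustration of the general idea, not the Charisma implementation: each speech segment exerts a time-decaying dominance over an articulation parameter (e.g., an MPEG-4 FAP such as lip opening), and the realized trajectory is the dominance-weighted average of segment targets. All timings, targets and shape constants here are invented for illustration.

```python
# Sketch of dominance-based coarticulation blending (Cohen-Massaro style).
import numpy as np

def dominance(t, t_peak, alpha=1.0, theta=6.0, c=1.0):
    """Negative-exponential dominance of a segment around its peak time."""
    return alpha * np.exp(-theta * np.abs(t - t_peak) ** c)

# (peak time in seconds, target parameter value) for three visemes.
segments = [(0.10, 0.8), (0.25, 0.1), (0.45, 0.6)]

t = np.linspace(0.0, 0.6, 121)
weights = np.array([dominance(t, tp) for tp, _ in segments])
targets = np.array([tv for _, tv in segments])

# Dominance-weighted average; overlapping curves blend neighboring
# visemes into each other, which is precisely coarticulation.
trajectory = (weights.T @ targets) / weights.sum(axis=0)
```

Because neighboring dominance curves overlap, each viseme's realized shape is influenced by its context, which is what distinguishes this approach from playing back one fixed mouth shape per phoneme.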

Lauren Carr

Montclair State University, USA

Title: Connecting 3D Animation and Fine Art
Speaker
Biography:

Lauren Carr joins the Department of Art and Design as an Assistant Professor in the Animation/Illustration program. She has worked professionally for Disney Feature Animation, Cinesite, Sony Pictures Imageworks, and DreamWorks Animation. Her film projects include Tangled, Meet the Robinsons, Chicken Little, X-Men United, Rio, and Ice Age 4. Prof. Carr was a character simulation technical director at Blue Sky Studios and, prior to coming to Montclair State University, taught at the School of Visual Arts in the Department of Computer Art, Computer Animation & Visual Effects.

Abstract:

“3D animation” is commonly associated with animating characters; yet this arguably young medium can create aesthetic expression far beyond characters animated to tell a story. Exploiting 3D animation software by using its tools with traditional art forms and media for experimental art is seldom considered, despite its powerful potential. At the intersection of fine art and 3D animation, students can discover new creative techniques and approaches to problem-solving. Connecting these two learning paths holds powerful potential for discovery in the art academy, and is likely to produce innovative curriculum solutions resulting in communal learning and discovery among students and professors. This session explores an interdisciplinary approach combining 3D animation software with traditional art media, covering both the theory and the implementation of methods that combine fine art and 3D animation studies. The presentation draws on analysis of the presenter's own practice of combining traditional media with 3D animation software.

David M Breaux

Sr Character / Creature Animator & Animation Instructor, USA

Title: Facial Animation - Phonemes be gone…
Biography:

I have completed work on more than 30 projects, and counting, during my time in the industry. I have a combined 16+ years of experience in film, games, commercials and television. I specialize in highly detailed character and creature performance animation, using keyframed data, motion-captured data, or a hybrid of the two where appropriate. My professional film experience includes a diverse list of animal, character and creature types encompassing the fantastic, the realistic and the cartoony. My most recent released projects were for Blur Studios on the Tom Clancy’s The Division pre-release trailer and the Halo: The Master Chief Collection cinematic remastering.

Abstract:

Any serious animator worth their weight in frames has seen Preston Blair’s mouth expressions or heard of using phonemes for animating a character’s lip-sync. In its day this was quite an effective way for animators to break down dialogue into something manageable. The problem is that hand-drawn animation has never needed to recreate perfectly believable lip-sync; after all, the starting point of traditional hand-drawn animation is already several steps away from realism. This thinking, however, has carried over into CG animation in a couple of ways. Often character rigs will have predefined mouth shapes, which, rightfully so, can be art-directed; that is often a desired trait, especially if there is a large animation team or a specific thing a character is known for. However, these shapes confine you, create more work for riggers and modelers, and by their nature cost animators a bit of control. This system is also often used in games to automate facial animation, since games typically have far more dialogue to address than most feature films. However, it produces overly chattery results, hurting the visuals and even kicking the player out of their suspension of disbelief. I am proposing a different method, now that CG offers us the ability, for better or worse, to tweak our animation infinitely and achieve the most subtle of motion. This is a technique I have developed over my 16+ years animating characters and creatures who needed to speak dialogue, and it involves a deeper understanding of how humans speak, what our mouths are muscularly capable of doing, and how we perceive, visually, what someone is saying. It also takes some burden off the modelers and riggers, and simplifies controls for animators while increasing the control it affords them. I didn’t invent this, nature did; I have just refined how I think about it and distilled it into a description that I have never heard explained this way. My students are very receptive to this approach and often find it takes the mystery out of effective lip-sync, making it easier and faster to produce than they thought. Performance and lip-sync are my favorite things to work on as an animator.

Speaker
Biography:

Ben is a graduate of Ringling College of Art + Design’s renowned Computer Animation program as well as Texas A&M University’s College of Architecture. Ben also spent a year at Carnegie Mellon’s Entertainment Technology Center. Ben moved his family to Iowa in 2011 to help create the computer animation program at Southeastern Community College in West Burlington, IA. While there, he guided two animation teams in the production of their award-winning shorts at the national Business Professionals of America animation competition last year. Ben shares with students the knowledge and skills he continually gains from his own experiences in the animation industry. Prior to teaching, Ben worked as a character animator at Reel FX in Dallas, TX on Sony’s “Open Season 3”. While at Reel FX, Ben also did clean-up work on Open Season 3, Looney Shorts, Webosaurs, and DC Universe, and managed the render farm at night.

Abstract:

This presentation will address the incorporation of new methods, technologies, and tools for a more accessible and streamlined system to train the next generation of 3D artists. It will compare and contrast traditional tools and methods with new and emerging ones as well as highlight the pros and cons of each. It will also demonstrate why these changes are not only necessary, but will become mandatory in the future. Virtual Instruction can be defined simply as instruction given through a live online video feed without the instructor being physically present, or in some cases, without the student being physically present. While Virtual Instruction is not new to education, there are new concepts being introduced to make Virtual Instruction even more accessible, more affordable, and of an even higher quality. The proposed Virtual Instruction model will open a discussion about the challenges of companies hiring well-trained employees with less student loan baggage, the challenges of schools attracting qualified industry professionals to teach animation courses at their campuses, and the challenges of students striking a balance between quality and affordability in animation programs. These challenges make for a very promising environment to implement the next phase of Virtual Instruction. The idea of implementing the Virtual Instruction model across time-zones will also be discussed. This presentation will have several examples of instructional tools developed by the presenter, including personal and student projects. These examples will give compelling evidence of the effectiveness of the Virtual Instruction model, which is the goal of the presentation.

Speaker
Biography:

Russell Pensyl (MFA 88, BFA 85) is an American media artist and designer. His work maintains a strategic focus on communication, narrative, and user-centric design processes for interactive and communication media. Pensyl is currently a full Professor in the Department of Art+Design at Northeastern University, where he held the post of Chair from 2010 to 2012. Previous posts include Director of Research and Graduate Studies at Alberta College of Art + Design, Director of the Interaction and Entertainment Research Center, Executive Vice Dean of the School of Art, Design and Media at Nanyang Technological University in Singapore, and Chair of the Department of Digital Art and Design at Peking University. Pensyl’s current work includes the creation of location-based entertainment and several areas of technology for content delivery in environmental spaces, including facial recognition, positioning and localization, and gesture recognition. Recently, research in the use of facial recognition technology, positioning and augmented reality annotation has resulted in commercially viable communication technologies as well as user-centric, autonomously responsive systems using biometric data in interactive installations. Recent work explores “subtle presence”: autonomously responsive media in an interactive installation presenting a dynamic time-lapse still-life painting that shifts subtly in response to sensed personal characteristics of viewers in the exhibition space. In 2011, this installation was featured in the International Sarajevo Winter Festival. In 2008, Pensyl’s mixed reality installation “The Long Bar” was a curator-invited installation in the SIGGRAPH Asia Synthesis curated show/art gallery in Singapore. His exhibition credits include international exhibitions in China, the USA, Japan, and Europe.

Abstract:

Facial recognition technology is a growing area of interest; researchers are using these new applications for study in psychology, marketing, product testing and other areas. There are also applications where facial image capture and analysis can be used to create new methods for control, mediation and integration of personalized information into web-based apps, mobile apps and standalone systems for media content interaction. Our work explores the application of facial recognition with emotion detection to create experiences within these domains. For mobile media applications, personalized experiences can be layered onto personal communication. Our current software implementation can detect smiles, sadness, frowns, disgust, confusion, and anger. In a mobile media environment, content on a device can be altered to create a fun, interactive experience which is personally responsive and intelligent. Through direct communication between peer-to-peer mobile apps, moods can be instantly conveyed to friends and family, when desired by the individual. This creates a more personalized social media experience. Connections can be created with varying levels of intimacy, from family members to close friends, out to acquaintances and further to broader groups as well. The technique currently uses pattern recognition to identify shapes within an image field, using the Viola-Jones Haar-like features implementation in OpenCV [1], [2], [3], a FERET database [4] of facial images, and a support vector machine (LibSVM) [3] to classify the camera's view field and identify whether a face exists. The system processes the detected faces using an elastic bunch graph matching technique that is trained to determine facial expressions. These facial expressions are graphed on a sliding scale according to their distance from a target emotion graph, giving an approximate determination of the user’s mood.
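
As a concrete anchor for the detection stage described above, the sketch below shows Viola-Jones Haar-cascade face detection as exposed by OpenCV's Python bindings, followed by a hypothetical, pre-trained SVM emotion classifier. The cascade file ships with OpenCV; the `emotion_svm` model and the flattened-patch features are stand-ins for the elastic-bunch-graph features the abstract describes.

```python
# Face detection (Viola-Jones Haar cascade) plus a stand-in emotion SVM.
import cv2
import numpy as np

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(frame):
    """Return (x, y, w, h) rectangles for candidate faces in a BGR frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

def classify_emotion(gray_face, emotion_svm):
    """Classify a cropped grayscale face with a pre-trained cv2.ml SVM.

    The resized, flattened patch is a stand-in feature vector; the real
    system uses elastic bunch graph features, not raw pixels.
    """
    patch = cv2.resize(gray_face, (48, 48)).astype(np.float32).ravel()
    _, result = emotion_svm.predict(patch.reshape(1, -1))
    return int(result[0, 0])  # e.g., 0=neutral, 1=smile, 2=frown, ...
```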

Speaker
Biography:

Jennifer Coleman Dowling is an experienced new media specialist, designer, educator, author, and artist. She holds an M.F.A. in Visual Design from the University of Massachusetts Dartmouth and a B.A. in Studio Art from the University of New Hampshire. Dowling is a Professor in the Communication Arts Department at Framingham State University in MA focusing on Integrated Digital Media. She has been dedicated to her teaching and professional work for over 25 years, and is the author of “Multimedia Demystified” published by McGraw-Hill. Her current line of research and practice is analog-digital approaches pertaining to media, fine art, and design.

Abstract:

Teaching computer animation techniques using innovative approaches was made possible for me by two consecutive technology grants. iPads were procured to support inventive ways of learning digital animation and time-based media for artistic and commercial purposes. The devices assisted students in developing new visualization and production methods while concurrently providing theoretical and practical instruction in fundamental animation techniques. This approach facilitated a more imaginative process for solving problems, discovering inspiration, creating concepts, and exchanging ideas, so students could more fully develop their knowledge of the subject while building more versatile computer animation capabilities. Other advantages included the portability, accessibility, flexibility, and immediacy of using a mobile device as the primary course tool. Students used the iPad to sketch ideas, brainstorm, plan narrative and storytelling structures, conduct research, and present their work. They also had ongoing opportunities to collaborate on group projects, exchange ideas, discuss work, and give and receive feedback. Complementary tactics with iPads included: studying historical and contemporary figures in the animation field; sketching characters, scenes, and storyboards; manipulating timeline keyframes and stage elements, and adjusting camera views; digitizing and editing audio tracks; and capturing and manipulating photography and video. Assignments focused on subjects such as kinetic typography, logo animation, introductory sequences for video and film, web-based advertisements, cartoon and character animation, animated flipbooks, and stop-motion techniques. This presentation will cover the goals and outcomes of this research, including student survey results, assessments, and animation examples.

Speaker
Biography:

Professor David Xu is a tenured Associate Professor at Regent University, specializing in computer 3D animation and movie special effects. He received an MFA in Computer Graphics (3D Animation) from Pratt Institute in New York. He has served as a senior 3D animator at Sega in Japan, a senior CG special effects artist at Pacific Digital Image Inc. in Hollywood, and a professor of animation at several colleges and universities, where he developed 3D animation programs and curricula. He has been a committee member of the computer graphics organization SIGGRAPH, where he was recognized with an award for his work. He published the book Mastering Maya: The Special Effects Handbook at the invitation of Shanghai People's Fine Arts Publishing House.

Abstract:

In this talk, Professor Xu will present an overview of Maya special effects used in post-production. He will showcase some Maya special effects used in films, and share his thoughts on the role of Maya special effects in movies and commercials. In particular, he will go in depth into the explosion effect and the splash effect which he created for his published textbook, exploring the conceptualization, production process and effective solutions for these animation projects. He will also demonstrate various Maya special effects techniques: for example, how to create a bomb using the particle instancer; how to create explosion and fire effects by applying the Dynamic Relationship Editor and Particle Collision Event Editor together with gravity and radial fields; how to create an ocean surface by applying soft bodies; and how to create ocean splash effects by applying rigid bodies, the particle system, the Particle Collision Event Editor and gravity.
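
For readers who have not scripted Maya dynamics, the fragment below is a minimal maya.cmds sketch of the kind of setup the talk walks through: an omni particle emitter whose particles are pulled by a gravity field. It must be run inside Maya; the rates and magnitudes are arbitrary, and the full effects in the talk layer instancing, additional fields and collision events on top of a setup like this.

```python
# Minimal Maya dynamics setup: omni emitter + particles + gravity field.
import maya.cmds as cmds

# Create an emitter and a particle object, then connect them so the
# emitter feeds the particle system.
emitter = cmds.emitter(type='omni', rate=200, speed=5)[0]
particles = cmds.particle(name='debrisParticles')[0]
cmds.connectDynamic(particles, emitters=emitter)

# A gravity field pulls the particles downward (the "gravity" the
# abstract mentions); radial fields attach the same way via fields=...
gravity = cmds.gravity(magnitude=9.8, directionY=-1)[0]
cmds.connectDynamic(particles, fields=gravity)

cmds.playbackOptions(minTime=1, maxTime=120)
```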

Will Kim

Riverside City College, USA

Title: Animation as Art in Motion
Speaker
Biography:

Will Kim is a tenured Associate Professor of Art at Riverside City College and a Los Angeles-based artist. Kim is the founder and director of the RCC Animation Showcase. Kim received an M.F.A. in Animation from UCLA and a B.F.A. in Character Animation from CalArts (California Institute of the Arts). Before teaching at RCC, he also taught at CalArts, Community Arts Partnership and Sitka Fine Arts Camp as a media art instructor. Kim’s work has been shown in over 100 international film/animation festivals and auditoriums, including the Directors Guild of America (DGA) Theater, the Academy of TV Arts and Sciences Theater, The Getty Center, The USC Arts and Humanities Initiative, and the Museum of Photographic Arts San Diego. As an animation supervisor and lead animator, Will has participated in various feature and short live-action films that were officially selected for the New York Times’ Critic’s Pick, the United Nations’ climate change conference (Framework Convention), Los Angeles Film Festival, Tribeca Film Festival, and Cannes.

Abstract:

Animation is a form of fine art. More important than what software is used to create characters’ movements is how well one can communicate ideas and tell stories with honesty. In animation the technology keeps changing all the time, while the fundamentals of drawing, painting, basic design, and animation principles never change. Pursuing traditional animation involves digital compositing, special effects, and digital editing methods. Pursuing 3D or digital 2D animation involves visualization and conceptualization that are often done in drawing or painting media. This lecture will discuss a teaching and learning method for animation filmmaking that embraces originality and creative freedom in storytelling and self-expression, while students receive extensive opportunities to study digital animation techniques combined with traditional and/or experimental animation media.

Inma Carpe, Ed Hooks, Susana Rams

The Animation Workshop/VIA University College, Denmark, and Polytechnic University of Valencia, Spain.

Title: Animation & Neurocinematics*: The visible language of E-motion-S and its magical science.
Speaker
Biography:

Inma Carpe works as a visual development artist/animator and teacher at the ALL, The Animation Workshop in Denmark. She gives workshops and collaborates with other countries on developing educational curricula, studying animation and affective neuroscience for self-development and communication, with a focus on emotions and mindfulness in production. She occasionally works at film festivals in Hollywood as a production assistant. Her personal work in animation reflects an interest in collage, blending animation with fashion illustration, science and education. Her specialty in preproduction has led her to live in different countries, working on short formats for independent studios.

Abstract:

We love movies because we like to jump from our “reality” to live a dream, a parallel universe that inspires us. We long for adventure, love, excitement, answers to our quests. That is the magic of cinema: it makes you believe what you see and, above all, FEEL it. As Antonio Damasio said, “we’re feeling machines that think”. Such feelings come from the interpretation of the emotions in our bodies. Emotions are our universal language, the motivation of living, the fuel; the key to what makes a movie successful and truly an art piece is that it moves you, and the secret is empathy. Animation, indeed, is a social-emotional learning medium, which goes beyond the limitations of live-action movies thanks to the diversity of its techniques and its visual plasticity, capable of constructing the impossible. Animators are not real actors but more like the midwife who brings the anima into aliveness, and that requires knowing how emotions work. Ed Hooks, an expert in training animators and actors, always remarks that “emotion tends to lead action”; animators must understand this, as well as the connections between thinking, emotions and physical action. I would like to show how integrating Hooks’ advice with the emerging results of scientists like Talma Hendler, Gal Raz or Paul Ekman, who study the science behind the scenes, the magic of neurocinematics (Uri Hasson), can help any professional from the industry to be more aware of our performances and enhance the cinematic experience. Animation is a visual thinking and feeling medium, which offers a promising, unlimited arena in which to explore and practice emotional intelligence and keep us interested in living fully aware and feeling new realities by loving and creating meaningful movies.

*Neurocinematics (Hasson): the neuroscience of cinema. Such studies reveal which brain areas, and which related emotions, are engaged when watching movies.

Benjamin Kenwright

Edinburgh Napier University, United Kingdom

Title: Character Animation using Genetic Algorithms
Biography:

Dr. Benjamin Kenwright is part of the games technology group at Edinburgh Napier University. He studied at Liverpool and Newcastle University before moving on to work in the game industry and eventually joining the Department of Computing at Edinburgh Napier University. His research interests include real-time systems, evolutionary computation, and interactive animation. He is also interested in physics-based simulations and massively parallel computing.

Abstract:

The emergence of evolving search techniques (e.g., genetic algorithms) has paved the way for innovative character animation solutions, for example, generating human movements 'without' key-frame data. Instead, character animations can be created using biologically inspired algorithms in conjunction with physics-based systems. Meanwhile, the development of highly parallel processors, such as the graphical processing unit (GPU), has opened the door to performance-accelerated techniques, allowing us to solve complex physical simulations in reasonable time frames. These acceleration techniques, in conjunction with sophisticated planning and control methodologies, enable us to synthesize ever more realistic characters that go beyond pre-recorded ragdolls towards more self-driven, problem-solving avatars. While traditional data-driven applications of physics within interactive environments have largely been confined to producing puppets and rocks, we explore a constrained autonomous procedural approach. The core difficulty is that simulating an animated character is easy, while controlling one is difficult. Since the control problem is not confined to human-type models (e.g., creatures with multiple legs, such as dogs and spiders), ideally there would be a way of producing motions for arbitrary physically simulated agents. This presentation focuses on evolutionary algorithms (i.e., genetic algorithms) compared to the traditional data-driven approach. We explain how generic evolutionary techniques are able to produce physically plausible and life-like animations for a wide range of articulated creatures in dynamic environments. We also explain the computational bottlenecks of evolutionary algorithms and possible solutions, such as exploiting massively parallel computational environments (i.e., the GPU).
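
To make the evolutionary loop concrete, here is a toy Python sketch of a genetic algorithm evolving controller parameters. The genome encodes amplitude and phase per joint of a sinusoidal gait controller; in the work described, fitness would come from a physics-based simulation (e.g., distance walked before falling), which is replaced here by an invented stand-in function so the sketch runs on its own.

```python
# Toy genetic algorithm over gait-controller parameters.
import numpy as np

rng = np.random.default_rng(0)
POP, GENES, GENERATIONS = 40, 8, 60   # 4 joints x (amplitude, phase)

def fitness(genome):
    # Stand-in for a physics-based score such as distance walked.
    amps, phases = genome[::2], genome[1::2]
    return float(np.sum(amps * np.cos(phases)) - 0.1 * np.sum(amps ** 2))

pop = rng.uniform(-1.0, 1.0, (POP, GENES))
for _ in range(GENERATIONS):
    scores = np.array([fitness(g) for g in pop])
    elite = pop[np.argsort(scores)[-POP // 2:]]           # selection
    parents = elite[rng.integers(0, len(elite), (POP, 2))]
    cut = rng.integers(1, GENES, POP)[:, None]            # crossover point
    mask = np.arange(GENES)[None, :] < cut
    pop = np.where(mask, parents[:, 0], parents[:, 1])    # one-point crossover
    pop += rng.normal(0.0, 0.05, pop.shape)               # mutation

print("best fitness:", max(fitness(g) for g in pop))
```

Each fitness call is independent of the others, which is why the abstract's GPU remark matters: the whole population can be evaluated in parallel.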

Speaker
Biography:

Daniel Boulos completed his Master's in Educational Technology at the University of Hawaii and his Bachelor of Fine Arts at California Institute of the Arts. Mr. Boulos has worked professionally as an animator for Walt Disney Studios, DreamWorks Animation and Warner Brothers Feature Animation. He has been teaching animation in higher education for 20 years and recently completed his animated film, “The Magnificent Mr. Chim”. He is a lifetime member of the Animation Guild and a member of ASIFA (Association Internationale du Film d’Animation). He has presented and been published in the United States and abroad. His animation work appears in more than ten animated feature films and numerous commercials and animated shorts. He is currently writing a comprehensive book on animation processes.

Abstract:

Stylization is at the heart of 2D animation design and is only recently being more fully explored in 3D animated films. In the early days of 3D animation, the push for realism in lighting, rendering and deformations displaced the pursuit of stylization in the quest to expand the capabilities of computer graphics technology. With those technical problems solved, 3D animation has more recently embraced stylization in design and character movement. Stylization can also be interpreted as playfulness, and “play is at the heart of animation” (Powers 2012, p. 52). Nature can be seen as an “abstract visual phenomenon” (Beckman & Ezawa 2012, p. 101), and the portrayal of hyper-realistic human characters in 3D animation can lead to the alienation of an audience, as they may not accept the characters as real (Kaba 2012). It is the ability of animation to “break with naturalistic representation and visual realism” (Ehrlich 2011) that is observed as one of the strengths of the art. This paper discusses the implications of stylized design and its use in 3D animated films, while drawing important references from traditional hand-drawn animation stylization processes that pose a challenge to modern 3D animation studios.

Speaker
Biography:

Anandh Ramesh is an Honors graduate in 3D Animation and VFX from Vancouver Film School with a Master's in Computer Science (Computer Graphics) from The University of Texas at Arlington. He is the CEO of Voxel Works Pvt. Ltd, a premier animation training institution in Chennai, India. He has published a course on 3D stereoscopy for Digital Tutors, and has published papers at several national and international conferences. He is a recipient of the Duke of Edinburgh International Standard for Young People, the Bharat Excellence Award and the Rashtrya Ratan.

Abstract:

I propose a method in which facial animation for characters is derived by reverse engineering from the final action on the storyboard back to the thought train driving the action. For this process, we classify actions into conscious, subconscious and unconscious actions, and derive the less obvious subconscious and unconscious parts leading to the conscious action. We begin by analyzing the situation at hand and how it applies to each character in it. Then we use the storyboards to understand the primary action of the character. Here we study the face of the character, i.e., his expression, and the body language, i.e., the line of action and the pose. Then we proceed to analyze possible references to the character's past that could drive the action. Here, we try to reason out things he might have seen or heard, and his own internal reasoning, that lead to his interpretation of the situation and the consequent action. Finally we derive the inner monologue of the character that drives the action. Once we finish the reverse engineering from the action in the storyboard to the thoughts and emotions, we map the eye darts, blinks, eyebrow movement, leading actions and their required anticipations within the time frame stipulated by the storyboard. This method of reverse-engineering-based animation results in more cohesive acting throughout a film, and creates a greater connection with audiences.

Yen-Jung Chang

Department of Graphic Art and Communication, National Taiwan Normal University

Title: A Framework of Humorous and Comic Effects on Narrative and Audiovisual Styles for Animation Comedy
Speaker
Biography:

Yen-Jung Chang was born in Taipei, Taiwan in 1972. He studied in the School of Film and Animation at Rochester Institute of Technology, USA. After graduation, he worked as an animator in Buffalo and Los Angeles. In 2006, Yen-Jung was granted a scholarship from the Ministry of Education, Republic of China (Taiwan) to study for a PhD in the School of Creative Media, RMIT University, Australia. He obtained the PhD in 2009 and returned to Taiwan to teach at universities. He has completed four animated short films as a film director. He is also a researcher focusing on animation theories and practices, and has actively participated in academic events and film festivals. Yen-Jung Chang now teaches in the Department of Graphic Art and Communication, National Taiwan Normal University, Taipei, Taiwan.

Abstract:

Humorous and comic elements are essential for entertaining audiences and key to box-office success. However, little research systematically illustrates the importance and effects of these elements, owing to the complexity of subjective judgment during film production. Hence, this research aims to analyze the narrative and audiovisual styles that promote the effects of animation comedy and to consolidate them into a framework. The elements and features for evaluating an animated film are formed based on surveys of expert opinion from the animation industry and academia. A consensus of experts' opinions on weights and ratings is mathematically derived using fuzzy Delphi and AHP (analytic hierarchy process) methodology. The results indicate that reversal, exaggeration and satire are regarded as the most significant narrative features in an animated film. Regarding the application of audiovisual elements, characters' acting, character design and sound are perceived as prominently important. Based on the preliminary structure obtained from the survey, a framework of audiences' reception of the humorous and comic effects of animated films is established. This framework illustrates the process by which audiences perceive and react to the narrative and audiovisual elements of animation comedy. Observation and evaluation of this framework in theaters can be studied further.
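
The AHP step mentioned above has a compact numerical core: priority weights are the principal eigenvector of a pairwise-comparison matrix, and a consistency ratio checks the coherence of the experts' judgments. The sketch below uses invented comparison values for three narrative features; the real matrix would come from the expert survey.

```python
# AHP priority weights from a pairwise-comparison matrix (Saaty 1-9 scale).
import numpy as np

# Illustrative comparisons of reversal vs. exaggeration vs. satire.
A = np.array([[1.0, 2.0, 3.0],
              [1/2, 1.0, 2.0],
              [1/3, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()               # normalized priority weights

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)   # consistency index
cr = ci / 0.58                         # random index RI = 0.58 for n = 3
print("weights:", weights.round(3), "consistency ratio:", round(cr, 3))
```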

Biography:

Omar Linares graduated with a major in Cultural Practices from Emily Carr University of Art and Design in Vancouver, Canada. His studies revolve around animation, documentary film, and international cinema. He will be joining the Master's in Film Studies at Concordia University in Montreal in September 2015.

Abstract:

Animation has become ubiquitous, from cartoons to special effects, from commercials to information visualization; nonetheless, its own definition is more elusive than ever. Digital imaging has blurred the line between what is animated and what is a reproduction of recorded movement, rendering previous definitions of frame-by-frame production and non-recorded movement seemingly obsolete. Moreover, digital automation has also contested the authorship of moving images. In this light, can animation be defined? Rather than defining animation by what it is not, as the illusion of motion that is not recorded, the author reviews constitutive traits common to all moving images, like intervallic projection; those absent from animation, like reconstitution of movement; those specific to animation, like artificial change in positions; and notions of the index and digital authorship to distinguish animation as a particular type of moving image.

These considerations are arranged in a set of criteria with which to define animation by what it is, positively. Additionally, while the emphasis is on digital moving images, these criteria are applicable to analogue techniques of animation. Ultimately, the author's examples point to a continuity with old techniques and definitions, a continuity that extends to moving-image practices outside of either animation or cinema.

Biography:

Currently serving as an Instructor at the University of the Incarnate Word in San Antonio, Texas. Prior to teaching, I worked at various studios such as Sony Imageworks, Sony Computer Entertainment of America, Naughty Dog & Infinity Ward. Some of my professional projects include Green Lantern, The Amazing Spider-Man, Uncharted 2, Uncharted 3 and The Last of Us. Most recently, I served as a senior animator at Sony Computer Entertainment of America in San Diego, working on an AAA PS4 title.

Abstract:

Incoming freshmen face a large reality check entering animation. Animation to most is fun and exciting, and offers immersion in an entertainment world that is considered glamorous. Not only is the work challenging and time-consuming, it requires intense attention to detail and constant practice and improvement. Instant gratification is not the norm. Comprehending the number of individuals, the talents and the workload involved in the animation process is only the beginning of the learning curve. The animation industry seeks students who are not only technically savvy, but dedicated, patient and, most importantly, able to work well with others, work hard and long hours, and understand their role and responsibilities in the production. Knowing the principles of animation provides a strong foundation in the field, but being able to apply them is key, along with learning other technical aspects such as how and why to use the graph editor, the timeline, weighted or unweighted tangents, or broken tangents. This presentation will outline the freshman animation course developed, along with various teaching techniques and tools, with some preliminary outcomes and lessons learned.

Tien-Tsin Wong

Chinese University of Hong Kong, Hong Kong

Title: Computational Manga and Anime
Speaker
Biography:

Tien-Tsin Wong is known for his pioneering work in computational manga, image-based relighting, ambient occlusion (dust accumulation simulation), sphere maps, and GPGPU for evolutionary computing. He graduated from the Chinese University of Hong Kong in 1992 with a B.Sc. degree in Computer Science. He obtained his M.Phil. and Ph.D. degrees in Computer Science from the same university in 1994 and 1998, respectively. He was with HKUST in 1998, and in August 1999 he joined the Computer Science & Engineering Department of the Chinese University of Hong Kong, where he is currently a Professor. He is also the director of the Digital Visual Entertainment Laboratory at CUHK Shenzhen Research Institute (CUSZRI). He is an ACM Senior Member and an HKIE Fellow. He received the IEEE Transactions on Multimedia Prize Paper Award 2005 and the Young Researcher Award 2004. He served on the Academic Committee of the Microsoft Digital Cartoon and Animation Laboratory at Beijing Film Academy, and as a visiting professor at both South China University of Technology and the School of Computer Science and Technology at Tianjin University. He has been actively involved (as a program committee member) in several prestigious international conferences, including SIGGRAPH Asia (2009, 2010, 2012, 2013), Eurographics (2007-2009, 2011), Pacific Graphics (2000-2005, 2007-2014), ACM I3D (2010-2013), ICCV 2009, and IEEE Virtual Reality 2011. His main research interests include computer graphics, computational manga, computational perception, precomputed lighting, image-based rendering, GPU techniques, medical visualization, multimedia compression, and computer vision.

Abstract:

Traditional manga (comic) and anime (cartoon) creation are painstaking processes. Even when computers are utilized during production, they mainly serve as a naive digital canvas. With the increasing computing power and decreasing cost of CPUs and GPUs, more computing resources can be exploited cost-effectively for intelligent, semi-automatic creation of aesthetic content. In this talk, we present our recent works on computational manga and anime, in which we aim at facilitating various production steps with advanced computer technologies. Manga artists usually draw backgrounds based on real photographs. Such background preparation is tedious and time-consuming. Some artists already make use of simple computer techniques, such as halftoning, to convert a given color photograph into B/W manga. However, the resultant mangas are inconsistent in style and monotonous in pattern due to the single halftone screen. I will present a way to turn a color photograph into manga while preserving the color distinguishability of the original photo, just as traditional manga artists do. On the other hand, there is a trend of migrating manga publishing from the traditional paper medium to the digital domain via the screens of portable devices. There are companies colorizing B/W mangas (of course, in a painstakingly manual fashion) to allow users to read color manga on portable devices. I will present a computer-assisted method to colorize an originally B/W manga into a color version by simply scribbling on the B/W version. Lastly, I will present our latest work on the automatic conversion of 2D hand-drawn cel animations to stereoscopic ones. As it is infeasible to ask cel animators to draw stereo frames, very few stereo cel animations have been produced so far. I will present a method that exploits the scarce depth cues left in the hand-drawn animation in order to synthesize temporally consistent and visually plausible stereoscopic cel animation.
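
To see why a single halftone screen looks monotonous, the baseline is worth sketching: ordered dithering thresholds every pixel against one tiled matrix, so every region carries the same pattern. This tiny Python example is only that naive baseline, not the talk's style-preserving method.

```python
# Naive single-screen halftoning: ordered dithering with a 4x4 Bayer matrix.
import numpy as np

BAYER4 = np.array([[ 0,  8,  2, 10],
                   [12,  4, 14,  6],
                   [ 3, 11,  1,  9],
                   [15,  7, 13,  5]]) / 16.0

def halftone(gray):
    """gray: 2D float array in [0, 1]; returns a binary B/W image."""
    h, w = gray.shape
    # Tile one screen over the whole image and threshold against it;
    # reusing a single screen everywhere is what makes the output monotonous.
    screen = np.tile(BAYER4, (h // 4 + 1, w // 4 + 1))[:h, :w]
    return (gray > screen).astype(np.uint8)

demo = np.linspace(0.0, 1.0, 64)[None, :].repeat(64, axis=0)  # gradient
print(halftone(demo).mean())  # about half the pixels turn white
```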

David M Breaux Jr.

Sr Character / Creature Animator & Animation Instructor, USA

Title: Animation from Motion Capture - Pitfalls, Potential and Proper Uses…
Biography:

I have completed work on more than 30 projects, and counting, during my time in the industry. I have a combined 16+ years of experience in film, games, commercials and television. I specialize in highly detailed character and creature performance animation, using keyframed data, motion-captured data, or a hybrid of the two where appropriate. My professional film experience includes a diverse list of animal, character and creature types encompassing the fantastic, the realistic and the cartoony. My most recent released projects were for Blur Studios on the Tom Clancy’s The Division pre-release trailer and the Halo: The Master Chief Collection cinematic remastering.

Abstract:

Motion capture is the practice of capturing the movements of a chosen subject, most often a human subject. Motion capture has progressed greatly through many iterations of technology over the years. The mysteries that remain seem to be when and how to use it. That statement is a little audacious, I must admit, but there is good reason for it. Quite often, motion capture in both games and film is viewed as a means to a quicker and cheaper solution. What is never taken into consideration is the inevitability of a director changing their mind, requesting adjustments, and the ever-popular dirty mo-cap data received from the supplier. Repairing, changing or adjusting the data can often take considerable time and can be quite monotonous and taxing on the artists assigned the job. This is not to say mo-cap does not have its place, especially in film, where realism in VFX-laden movies stands oddly in contrast to the ever less realistic scenarios the characters are thrust into. Motion capture is used very often in video games with the intention of adding to the realism of the game. What we often end up with is very weightless-feeling characters. Why is that? The root of the problem is how the motion capture is being used, and the lack of cues that the eye, and ultimately the human brain, uses to register visual weight. As hardware increasingly allows animators to include more and more detail in character animation, this becomes less of an issue, but understanding exactly what makes something look weightless informs our understanding of the best methods to use in our creations.

  • Simulation and Modeling

Session Introduction

Paul Fishwick

The University of Texas at Dallas, USA

Title: Leveraging the Arts for Modeling & Simulation
Speaker
Biography:

Paul Fishwick is Distinguished University Chair of Arts and Technology (ATEC), and Professor of Computer Science. He has six years of industry experience as a systems analyst working at Newport News Shipbuilding and at NASA Langley Research Center in Virginia. He was on the faculty at the University of Florida from 1986 to 2012, and was Director of the Digital Arts and Sciences Programs. His PhD was in Computer and Information Science from the University of Pennsylvania. Fishwick is active in modeling and simulation, as well as in the bridge areas spanning art, science, and engineering. He pioneered the area of aesthetic computing, resulting in an MIT Press edited volume in 2006. He is a Fellow of the Society for Computer Simulation, served as General Chair of the Winter Simulation Conference (WSC), was a WSC Titan Speaker in 2009, and has delivered over 16 keynote addresses at international conferences. He is Chair of the Association for Computing Machinery (ACM) Special Interest Group in Simulation (SIGSIM). Fishwick has published over 230 technical papers and has served on all major archival journal editorial boards related to simulation, including ACM Transactions on Modeling and Simulation (TOMACS), where he was a founding area editor of modeling methodology in 1990. He is on the editorial board of ACM Computing Surveys.

Abstract:

Since its inception, computer graphics has played a major role in several areas such as computer-aided design, game development, and computer animation. Through the use of computer graphics, we enjoy artificial realities and the ability to draw figures within a flexible electronic medium. Computer simulation in computer graphics is generally construed as simulation used to achieve realistic behavioral effects. But what if the naturally art-based design approaches in graphics could be used to visualize and manipulate the mathematical models that underlie simulation? This direction suggests that graphics, and the arts, can affect how we represent complex models. I will present approaches used in our Creative Automata Laboratory to reframe models as works of art that maintain an aesthetic appeal and yet are highly functional and mathematically precise.

Leonel Toledo

Instituto Tecnológico de Estudios Superiores de Monterrey Campus Estado de México

Title: Level of Detail for Crowd Simulation
Speaker
Biography:

Leonel Toledo received his PhD from Instituto Tecnológico de Estudios Superiores de Monterrey, Campus Estado de México, in 2014, where he is currently a full-time professor. From 2012 to 2014 he was an assistant professor and researcher. He has devoted most of his research work to crowd simulation and visualization optimization. He has worked at the Barcelona Supercomputing Center using general-purpose graphics processors for high-performance graphics. His thesis work was on level of detail used to create varied animated crowds. His research interests include crowd simulation, animation, visualization and high-performance computing.

Abstract:

Animation and simulation of crowds finds applications in many areas, including entertainment (e.g. animation of large numbers of people in movies and games), creation of immersive virtual environments, and evaluation of crowd management techniques. Interactive virtual crowds require high-performance simulation, animation and rendering techniques to handle numerous characters in real time. These characters must be believable in their actions and behaviors. The main challenges are to remove the least perceptible details first, to preserve the global aspect of the crowd as well as possible, and meanwhile to significantly improve computation times. We introduce a level-of-detail system which is useful for varied animated crowds and capable of handling several thousand different animated characters at interactive frame rates. The system is focused on rendering optimization and can be extended to build more complex scenes. This level-of-detail system allows us to incorporate physics into the simulation and to modify the animation of the agents as forces are applied to the models in the environment, avoiding rendering and simulation bottlenecks as much as possible. In this way it is possible to render scenes with up to a quarter of a million characters in real time at interactive frame rates.
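
A minimal sketch of the selection step in such a system: each agent is assigned a representation (full skeleton, simplified mesh, impostor) based on its distance to the camera. The thresholds and category names below are illustrative, not the measured values of the presented system.

```python
# Distance-based level-of-detail assignment for crowd agents.
import numpy as np

LOD_THRESHOLDS = [15.0, 50.0]  # meters: full < 15 <= simplified < 50 <= impostor
LOD_NAMES = ["full_skeleton", "simplified_mesh", "impostor"]

def assign_lods(agent_positions, camera_pos):
    """agent_positions: (N, 3) array; returns one LOD index per agent."""
    dists = np.linalg.norm(agent_positions - camera_pos, axis=1)
    return np.searchsorted(LOD_THRESHOLDS, dists)

agents = np.random.default_rng(1).uniform(-100.0, 100.0, (250_000, 3))
lods = assign_lods(agents, camera_pos=np.zeros(3))
for i, name in enumerate(LOD_NAMES):
    print(name, int((lods == i).sum()))
```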

Biography:

Abstract:

A huge challenge is to simulate tens of thousands of virtual characters in real time while they pro-actively and realistically avoid collisions with each other and with obstacles present in their environment. This environment contains semantic information (e.g. roads and bicycle lanes, dangerous and pleasant areas), is three-dimensional (e.g. contains bridges people can walk both over and under) and can change dynamically (e.g. a bridge partially collapses or some fences are removed). We show how to create a generic framework centered around a multi-layered navigation mesh and how it can be updated dynamically and efficiently for such environments. Next, we show how (groups of) people move, avoid collisions and coordinate their movements, based on character profiles and semantics. We run our simulations in realistic environments (e.g. soccer stadiums or train stations) and game levels to study the effectiveness of our methods. Finally, we demonstrate our software package that integrates this research. Why would we need to simulate a crowd? The results can be used to decide whether crowd pressures build up too much during a festival such as the Love Parade; to find out how to improve crowd flow in a train station; to plan escape routes for use during a fire evacuation; to train emergency personnel to deal with evacuation scenarios; to study a range of scenarios during an event; or to populate a game environment with realistic characters. After this presentation you will understand why state-of-the-art crowd simulations need a more generic and efficient representation of the navigable areas, why speed and extendability are obtained by splitting the simulation into at least five different levels, why we need a paradigm shift from graph-based to surface-based navigation, and why a path planning algorithm should NOT compute a path.
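
One building block behind such a framework can be sketched compactly: on a navigation mesh, polygons act as graph nodes and shared edges as links, and A* returns a corridor of polygons rather than a fixed path (a separate smoothing step, e.g. the funnel algorithm, extracts the actual route, in the spirit of the abstract's closing point). The four-polygon mesh below is invented for illustration.

```python
# A* over a toy navigation-mesh adjacency graph (polygons as nodes).
import heapq
import math

# polygon id -> walkable-polygon center, plus adjacency via shared edges.
centers = {0: (0, 0), 1: (5, 0), 2: (5, 5), 3: (10, 5)}
adjacent = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}

def astar(start, goal):
    dist = lambda a, b: math.dist(centers[a], centers[b])
    frontier = [(0.0, start)]
    came_from, cost = {start: None}, {start: 0.0}
    while frontier:
        _, node = heapq.heappop(frontier)
        if node == goal:
            break
        for nxt in adjacent[node]:
            new_cost = cost[node] + dist(node, nxt)
            if nxt not in cost or new_cost < cost[nxt]:
                cost[nxt] = new_cost
                came_from[nxt] = node
                heapq.heappush(frontier, (new_cost + dist(nxt, goal), nxt))
    corridor, node = [], goal
    while node is not None:         # walk back from goal to start
        corridor.append(node)
        node = came_from[node]
    return corridor[::-1]

print(astar(0, 3))  # polygon corridor: [0, 1, 2, 3]
```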

Speaker
Biography:

McArthur Freeman, II is a visual artist and designer who creates work that explores hybridity and the construction of identity. His works have ranged from surreal narrative paintings and drawings to digitally constructed sculptural objects and animated 3D scenes. His most recent works combine three interrelated emerging technologies: digital sculpting, 3D scanning, and 3D printing. Freeman’s work has been published in Nka Journal of Contemporary African Art and has been exhibited in over 50 group and solo shows within the United States. Freeman is currently an Assistant Professor of Video, Animation, and Digital Arts at the University of South Florida. Prior to his appointment at USF, Freeman taught at Clarion University, Davidson College, and North Carolina State University. He has also taught Drawing at the Penland School of Crafts. Freeman earned his BFA degree in Drawing and Painting from the University of Florida. He received his MFA from Cornell University, with a concentration in Painting. He also holds a Master of Art and Design from North Carolina State University in Animation, New Media, and Digital Imaging, which he received in 2008.

Abstract:

Much of CG technology is based on simulations of real-world practices. With the ability to paint with pixels, sculpt with polygons, render from virtual cameras, and digitally fabricate 3D forms, many new artists are increasingly meeting traditional disciplines for the first time through their digital simulations. Furthermore, the digital environment often facilitates the integration of multiple disciplines and hybrid practices that are not inherent in their analog counterparts. This presentation will discuss the potential of digital tools to address traditional processes, both for learning and for new hybrid practices. What can we learn from the conventions and philosophies embedded in the software? How can we effectively integrate this technology into traditional arts courses without undermining the established disciplines? In what ways can we leverage hybrid practices for a deeper understanding of the crafts involved?

Omar Khan and Paul Huang

Trident Associates, USA

Title: VR Based Landing Aid
Speaker
Biography:

Omar Khan is currently working as an Engineering Manager for an industrial and commercial firm. In prior capacities, Omar served at various defense and commercial companies including United Defense, BAE Systems, MAV6 and Curtiss Wright, where his roles included research and development, systems engineering, warfare and operations analysis, product management and international business development. Mr. Khan has authored several technical publications in the areas of modeling and simulation for naval weapon systems and holds patents in the same field. He received a Bachelor of Science degree in electrical engineering from the University of Engineering and Technology Lahore, a Master of Science in electrical engineering from Cleveland State University and an MBA from the University of Minnesota, Twin Cities.


Mr. Huang has over 40 years of engineering experience in academia, electro-hydraulic systems, sensor systems, servo systems, communication systems, robotics, system integration, ordnance systems, and modeling and simulation. He has worked on many naval and army weapon systems, and has served as a consultant for many industries in the areas of sensor systems, test equipment, medical devices, and ordnance systems. Mr. Huang is a former artillery officer. He has co-authored two books and over 60 technical publications, has taught technical courses in different countries, and holds several US and international patents ranging from devices and software to systems. He received MSEE and Ph.D. degrees from the University of Minnesota.

Abstract:

The use of computer-generated, high-quality real-time video gives engineers, scientists, and electronic game designers a powerful tool in so many applications that even the sky is no longer the limit. The advent of micro- and nanoelectronics further enables complicated devices to be put into smaller, inexpensive, and robust packages. During the last few years, small video-image-based devices have been installed in land vehicles to enhance driving comfort, convenience, and safety. These include navigational aids, GPS, collision avoidance devices, surround-view systems, and many others. The proliferation of these devices is mainly due to the relatively inexpensive and short life span of land vehicles compared to that of airplanes (and submarines). The authors previously developed a concept to help helicopter pilots land their craft when the out-of-the-window view cannot support a safe landing. This paper extends that work to an aid for landing on a moving platform such as a shipboard heliport. For landing on a shipboard platform, in addition to the obstacles of water spray and mist (due to sea-state conditions), frequent fog, and other weather-related elements, a platform moving in six degrees of freedom (three linear and three angular) creates even more challenges for the pilot. This paper provides a potential solution to the problems listed above. According to the analysis and preliminary computer simulation, the proposed landing aid may even have the potential to become an autonomous landing system and could be used in unmanned aerial vehicles as well.

Keywords: real-time, computer-generated video, operator aid, autonomous systems.

Speaker
Biography:

Aditya has been a professional 3D designer since 2000. He has worked on many games for gaming giants such as EA, 2K, and Disney, on famous titles including Need for Speed, BattleForge, Harry Potter, and Burnout Paradise. He is currently a Deputy Director at Sunovatech Infra Pvt Ltd, India. Over the past five years, Aditya has delivered more than 140 projects in 3D, virtual reality, visualization, and simulation for infrastructure and transportation across eight countries. He is also working on computer games for engineering students at the University of Qatar. At Sunovatech he leads a team of more than 200 artists creating 3D visualizations and developing PC and mobile games. His projects are entertaining and invite the viewer to interact with the project.

Abstract:

Simulation tools for the transportation industry have been emphasized since the early 1980s, and several studies and models of pedestrian movement have been researched since the 1990s. Today there are tools that can provide mathematical analyses of behavior and predictions regarding a proposed development. These mathematical interpretations, however, can only be understood by specialized transport planners and engineers, whereas the most critical decisions regarding any proposed development rest on political and public will. The need to simplify the mathematics into a visual medium that the public and politicians can understand in order to assess impact is the motivation behind the algorithm described in this paper. The raw mathematical outputs of traffic simulations are converted into high-quality 3D visualization using a virtual reality rendering processor. Traffic simulation software concentrates on the mathematical accuracy of traffic behavior rather than realistic and accurate visualization of the traffic and its surroundings, primarily because existing software cannot handle detailed, complex 3D models and structures in the simulation environment. This technology (the VR Platform) is currently under the exclusive IP of Sunovatech and is used as the core of the visualization process, in which thousands of vehicles and pedestrians are animated as an automated process. Using the VR Platform, a highly realistic and accurate simulation of vehicles, pedestrians, and their traffic infrastructure, such as signals and buildings, can be achieved. This technology offers decision makers, traffic engineers, and the general public a unique insight into traffic operations. It is highly cost-effective and an ideal tool for presenting complex ideas in any public consultation, presentation, or litigation process. This presentation will focus on how to combine realistic human and transportation simulations in a 3D visualization along with urban design elements. The use of simulation in all 3D visualization projects gives accurate results to planners, engineers, architects, and emergency response departments to test and approve the design of infrastructure. With this technology we have created stunning visualizations and provided solutions to multi-billion-dollar projects. We integrate 3D visualization software with traffic micro-simulation tools to create a close-to-real environment in terms of behavior, volumes, and routings. Calibrated and validated micro-simulation models are combined with a powerful rendering tool to visualize proposals before they are implemented on the ground.
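
To make the conversion step concrete, here is a minimal sketch (not Sunovatech's proprietary VR Platform) of how timestamped vehicle states exported from a micro-simulator might be resampled into per-frame keyframes for a renderer; the file name and column layout are assumptions:

```python
import csv
from bisect import bisect_left

def load_trajectories(path):
    """Group raw simulator rows (vehicle_id, t, x, y, heading) by vehicle."""
    tracks = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            tracks.setdefault(row["vehicle_id"], []).append(
                (float(row["t"]), float(row["x"]),
                 float(row["y"]), float(row["heading"])))
    for samples in tracks.values():
        samples.sort(key=lambda s: s[0])
    return tracks

def state_at(samples, t):
    """Linearly interpolate a vehicle's state at render time t
    (naive for heading wrap-around, which a real pipeline must handle)."""
    times = [s[0] for s in samples]
    i = bisect_left(times, t)
    if i == 0:
        return samples[0][1:]
    if i == len(samples):
        return samples[-1][1:]
    t0, *s0 = samples[i - 1]
    t1, *s1 = samples[i]
    a = (t - t0) / (t1 - t0)
    return tuple((1 - a) * v0 + a * v1 for v0, v1 in zip(s0, s1))

# Resample coarse simulator steps (e.g., 0.5 s) into 30 fps keyframes.
tracks = load_trajectories("microsim_export.csv")  # hypothetical export file
fps = 30
for vid, samples in tracks.items():
    t, end = samples[0][0], samples[-1][0]
    while t <= end:
        x, y, heading = state_at(samples, t)
        # hand (vid, t, x, y, heading) to the renderer as a keyframe here
        t += 1.0 / fps
```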

Xiang Feng and Wanggen Wan

Shanghai University, China & University of Technology, Sydney, Australia

Title: Physically-based Invertible Deformation Simulation of Solid Objects
Biography:

Xiang Feng is a PhD student in the School of Communication and Information Engineering, Shanghai University, and a member of the Institute of Smart City, Shanghai University. He received his BE degree from the School of Communication and Information Engineering, Shanghai University, in 2011, and has been pursuing his Master's and PhD there since. He is a dual doctoral degree student between Shanghai University and the University of Technology, Sydney, and was awarded a CSC Scholarship (China Scholarship Council) to study at the University of Technology, Sydney, between August 2014 and September 2015. He has authored six papers in internationally renowned journals and conferences in the areas of physically based deformation and animation, and 3D modelling and reconstruction. He has been involved in several research projects, including the General Program of the National Natural Science Foundation of China and the National High Technology Research and Development Program of China.

Dr. Wanggen Wan has been a Full Professor in the School of Communication and Information Engineering, Shanghai University, since 2004. He is also Director of the Institute of Smart City and Dean of the International Office at Shanghai University. He is Vice Chair of the IEEE CIS Shanghai Chapter and Chair of the IET Shanghai Local Network, and is an IET Fellow, an IEEE Senior Member, and an ACM Professional Member. He has been Co-Chairman of many well-known international conferences since 2008. His research interests include computer graphics, video and image processing, and data mining. He has authored over 200 academic papers in international journals and conferences and has been involved in over 30 research projects as Principal Investigator. Dr. Wan received his PhD degree from Xidian University, China, in 1992. From 1991 to 1992, he was a Visiting Scholar in the Department of Computer Engineering, Minsk Radio Engineering Institute, in the former USSR. He was a Postdoctoral Research Fellow in the Department of Information and Control Engineering, Xi'an Jiaotong University, from 1993 to 1995, a Visiting Scholar in the Department of Electrical and Electronic Engineering, Hong Kong University of Science and Technology, from 1998 to 1999, and a Visiting Professor and Section Head of the Multimedia Innovation Center, Hong Kong Polytechnic University, from 2000 to 2004.

Abstract:

With the increased computing capacity of modern computers, physically based simulation of deformable objects has gradually evolved into an important tool in many applications of computer graphics, including haptics, computer games, and virtual surgery. Within physically based simulation, large-deformation simulation of solid objects has attracted much attention. During large-deformation simulation, especially interactive simulation, element inversion may arise. In this case, standard finite element methods and mass-spring systems are not suitable because they cannot generate the elastic internal forces needed to recover from the inversion. This presentation will describe a method for invertible deformation of solid objects. We derive the internal forces and stiffness matrix of an invertible isotropic hyperelastic material from its energy density function. This method can be applied to any isotropic hyperelastic material whose energy density function is given in terms of strain invariants. To achieve realistic deformation, volume preservation is always pursued as an important property in physically based deformation simulation. We will discuss the volume preservation capacity of three popular invertible materials, the Saint Venant-Kirchhoff, Neo-Hookean, and Mooney-Rivlin materials, from the perspective of the volume term in the energy density function, and demonstrate how the volume preservation capacity of these three materials changes with their material parameters, such as the Lame coefficients and Poisson's ratio. Since solving for the new positions of the mesh object can be decomposed into independently solving for the displacement of each vertex from the equilibrium equation of motion at each time step, we can use CPU multithreading to speed up the calculations. We will also present a multithreaded CPU implementation of the internal forces and stiffness matrix.
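
As an illustration of an energy density function written in terms of strain invariants, here is one standard compressible Neo-Hookean model (an assumption for illustration, not necessarily the exact form used in this work), together with the first Piola-Kirchhoff stress derived from it; invertible formulations additionally diagonalize F via SVD so that the forces remain well defined when an element inverts and ln J breaks down:

```latex
% Strain invariants of the deformation gradient F.
\[
  I_1 = \operatorname{tr}\!\left(F^{\mathsf T} F\right), \qquad J = \det F .
\]
% Compressible Neo-Hookean energy density; the -\mu \ln J and
% \tfrac{\lambda}{2}(\ln J)^2 terms penalize volume change, so the
% Lame coefficients \mu, \lambda govern volume preservation.
\[
  \Psi(F) = \frac{\mu}{2}\left(I_1 - 3\right) - \mu \ln J
          + \frac{\lambda}{2}\left(\ln J\right)^2 .
\]
% Internal forces follow from the first Piola-Kirchhoff stress:
\[
  P(F) = \frac{\partial \Psi}{\partial F}
       = \mu\left(F - F^{-\mathsf T}\right) + \lambda \ln J \, F^{-\mathsf T} .
\]
```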

Speaker
Biography:

Seyed Reza Hashemi, born on May 19, 1986, received a BSc from the Mechanical Engineering Department of Azad University of Najafabad and an MSc from the Mechanical Engineering Department of Iran University of Science and Technology (IUST), Narmak, Tehran. He now works as an engineering researcher on industrial automation, mechatronics, motion control, and robotics at his private company, and also works part-time as a research assistant at Azad University.

Abstract:

Hardware-in-the-loop (HIL) simulation is a type of real-time simulation test that differs from a pure real-time simulation in that a real component is added to the loop. By applying the HIL technique, a component of a system can be tested physically under almost real conditions. Not only can this test save time and cost, but there are also no concerns about test safety. The tested component is often an electronic control unit (ECU), since most dynamic systems, especially in the aerospace and automobile industries, have a main controller (ECU). Sometimes, HIL is also of interest for evaluating the performance of other mechanical components in a system. Since HIL includes numerical and physical components, a transfer system is required to link these parts; it typically consists of a set of actuators and sensors. To get accurate test results, the dynamic effects of the transfer system need to be mitigated. The fuel control unit (FCU) is an electro-hydraulic component of the fuel control system in gas turbine engines. Investigating FCU performance through the HIL technique requires numerical models of the other related parts, such as the jet engine and the designed electronic control unit, and a transfer system is employed to link the FCU hardware and the numerical model. The objective of this study was to implement the HIL simulation of the FCU using LabVIEW and MATLAB. To get accurate simulation results, inverse and polynomial compensation techniques were proposed to compensate for time delays resulting from the inherent dynamics of the transfer system. Finally, the results obtained by applying both methods were compared.
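
The study itself was implemented in LabVIEW and MATLAB; purely as an illustration of the inverse-compensation idea, here is a toy discrete-time sketch that assumes the transfer system behaves as a first-order lag with time constant tau (the model, gains, and reference are all assumptions):

```python
import math

dt = 0.001      # 1 kHz real-time loop
tau = 0.05      # assumed first-order actuator time constant [s]

def actuator_step(y, u):
    """First-order lag model of the transfer-system actuator."""
    return y + (dt / tau) * (u - y)

def inverse_compensator(r, r_prev):
    """Invert the modeled lag tau*y' + y = u: perfect tracking of the
    reference r requires u = r + tau * dr/dt (rate estimated by a
    backward difference; a real system would filter this)."""
    r_dot = (r - r_prev) / dt
    return r + tau * r_dot

# Compare raw vs. compensated tracking of a 2 Hz sinusoidal reference.
y_raw = y_comp = r_prev = 0.0
for k in range(1, 2000):
    r = math.sin(2 * math.pi * 2 * k * dt)
    y_raw = actuator_step(y_raw, r)              # no compensation
    u = inverse_compensator(r, r_prev)           # model inversion
    y_comp = actuator_step(y_comp, u)
    r_prev = r
    if k % 500 == 0:
        print(f"t={k*dt:.2f}s raw_err={r - y_raw:+.4f} comp_err={r - y_comp:+.4f}")
```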

  • Virtual and Augmented Reality

Session Introduction

John Quarles

University of Texas at San Antonio, USA

Title: Virtual Reality for Persons with Disabilities: Current Research and Future Challenges
Speaker
Biography:

Dr. John Quarles is an Assistant Professor in the Department of Computer Science at the University of Texas at San Antonio. He is both a virtual reality researcher and a multiple sclerosis patient who has an array of disabilities. This gives him a unique perspective on how virtual reality can potentially improve the quality of life of persons with disabilities. In 2014, he received the prestigious National Science Foundation CAREER award for his work in this area.

Abstract:

Immersive Virtual Reality (VR) has been in research labs since the 1960s, but it will soon finally make it into the home (hopefully). Facebook's $2 billion acquisition of Oculus, a small Kickstarter-funded startup for immersive head-mounted displays, was a historical landmark in 2014 on the road to affordable, home-based VR systems. However, what impact will this have on persons with disabilities? Will at-home VR be universally usable and accessible? Based on current research in VR, many challenges must be overcome for VR to be usable and beneficial for persons with disabilities. Although researchers have studied fundamental aspects of VR displays and interaction, such as presence (i.e., the sense of 'being there', the suspension of disbelief), interaction techniques, latency, field of view, and cybersickness, almost all of the prior research has been conducted with healthy persons. Thus, it is not known how to effectively design an immersive VR experience for persons with disabilities, which could have a significant impact on emerging fields like VR rehabilitation and serious games. This talk explores what we know (or what we think we know) about how persons with disabilities experience VR and highlights the grand challenges that, if met, could significantly improve quality of life for persons with disabilities.

Speaker
Biography:

Sunil Thankamushy is a US-based video game industry professional with seventeen years of experience, who was part of the core teams that developed highly acclaimed and successful video game franchises such as CALL OF DUTY™: FINEST HOUR™ and MEDAL OF HONOR™. A graduate of UCLA, Sunil was hired by the DreamWorks Interactive studio as one of its first animators. After seven years at DreamWorks Interactive, and later Electronic Arts, he joined other game veterans to co-found Spark Unlimited™, a game studio based in Los Angeles. Five years after its inception, Spark had a long body of work that includes helping launch the now multi-billion-dollar CALL OF DUTY™ franchise with CALL OF DUTY™: FINEST HOUR™, as well as TURNING POINT™: FALL OF LIBERTY™, LEGENDARY™, and LOST PLANET 3™. Blending technology and animation has been a passion for Sunil. In every stage of his career, he has created animation paradigms and technology to improve the level of immersion in the virtual environment of the game and heighten the sense of realism for the player. After shipping more than eight video game titles, Sunil changed his life direction and set up DEEPBLUE Worlds Inc, a knowledge-based games studio making innovative products for children. His most recent product is an Augmented Reality-based mobile app called DINO ON MY DESK. Sunil recently joined Mt. San Antonio College in Walnut, California, as a professor of gaming and animation. He lives with his wife Diana and two children in beautiful San Diego, California.

Abstract:

This talk describes my experiences with my team using the strengths of Augmented Reality (AR) to design a fun and educational app series called Dino On My Desk. The core technologies we used are Qualcomm's Vuforia as the AR platform in conjunction with Unity as the gaming engine. As newcomers to the field at the time, we found that our best resource for breaking into the area was our own DIY spirit. I hired a loosely networked team of developers from around the world, including past students of mine (I teach animation and gaming at Mt. SAC college in California), to get the job done. The initial iteration was a 'confidence-building exercise' for us all and a way to see a mockup of the product; the proof was that, with very few features, we were able to entertain test audiences running our AR app on their mobile devices. The next two iterations were built over the previous ones, each time methodically adding functionality and engagement. I am a firm believer in the idea that to be effective, a product has to leverage the unique qualities of the technology it is built on, and in the process of building this product we were continually uncovering the unique interactions that AR offers. An overview of the AR genres that have evolved over the past few years, and the companies behind them, shows a trajectory that starts from the 'Magical', moves through the 'Function-driven', and arrives at the 'Enrichment-driven'. Finally, I would like to demo the product that started my journey into the mesmerizing field of Augmented Reality.

Adam Watkins

Augmented Ideas, LLC, San Antonio, Texas, USA

Title: Participatory Museum Experiences…Augmented
Speaker
Biography:

Adam Watkins is the CEO of Augmented Ideas, LLC (http://www.augmentedideas.com) and Professor of 3D Animation & Game Design (http://www.uiw3d.com) in the School of Media & Design at the University of the Incarnate Word in San Antonio. Watkins holds an MFA in 3D Animation and a BFA in Theatre Arts from Utah State University. He is the author of 12 books and over 100 articles on 3D animation and game design.

Abstract:

In two recent exhibitions, the McNay Art Museum in San Antonio, Texas was looking for ways to convert visitors into participants. In its search for ways to engage patrons, the McNay partnered with Augmented Ideas, LLC, led by two University of the Incarnate Word professors, to create new experiences centered around the exhibitions. The first, Real/Surreal, was a traveling exhibition of surreal, hyperreal, and realistic paintings. In this exhibit, augmented reality was used to create a discovery "game" in which the visitor finds visual clues within a painting. Once found, a clue unlocks questions, activities, and information layered on top of the painting itself. This activity encourages visitors to look more carefully and actively at the paintings and allows a variety of multimedia experiences without interfering with the patron's ability to experience the original art. In the second exhibition, CUT! Costumes and the Cinema, patrons used their mobile devices to collect virtual versions of the costumes on display. Using the camera features of the mobile device, they were then able to "try on" these costumes in a virtual dressing room, and the results could be shared with friends and with the McNay for use on its Facebook page. Together, using unobtrusive augmented reality techniques, the McNay was able to engage a new generation of patrons and provide an entirely new level of interaction and information without using any exhibition space or imposing on the original artworks.

David Mesple

Rocky Mountain College of Art and Design, USA

Title: The Virtual Normative Body, Fact or Fiction?
Speaker
Biography:

David Mesple’ is an American artist who exhibits around the world. His work has been profiled in texts, magazines, music CDs, and public television presentations. He is one of the few contemporary artists to be honored with a two-person exhibit alongside one of the Masters of Western Art, Rembrandt van Rijn. He is a non-dominant left-brained and right-brained artist, capable of linear, multi-linear, and non-linear thinking, who does not compartmentalize information, nor assert that knowledge resides exclusively within certain disciplines or domains. David believes that all information lies on a spectrum of immense complexity and diversity and is available to all problem-solvers. Mesple’ is a Professor of Art and Ideation at Rocky Mountain College of Art and Design and is working on his interdisciplinary PhD in "Virtuosity Studies" combining Fine Arts, Neuroscience, Physics, and Philosophy.

Abstract:

There are three representative types of virtual normative bodies: the virtual representation/simulacrum/mimesis of an actual normative human body; the virtual normative human body within the genre of CGI-manifested characters; and the performance of the virtual normative human that is not embodied visually. The first, representation/simulacrum/mimesis, has become alarmingly believable: audiences struggle to detect virtuality in cases where exact mimesis is the goal. But just as the Greeks discovered when they were able to make exact marble replicas of the human body, the neurological trait of being "hardwired to abstraction" (Ramachandran) led to non-normative human sculptures of increased aesthetic appeal. We see this in representations of the body in art and advertising today, so it comes as no surprise to see mimesis altered for purely aesthetic purposes, not just for supernatural narratives. In a time when the real and the virtual are becoming inseparable, philosopher Paul Virilio describes a new upheaval of our real-time perspective, akin to the effect of perspective created during the Quattrocento: "a very strange kind of perspective, a 'stereo' perspective, real space-real time, which gives us another kind of 'relief'," forcing reconfigurations of culture and virtual characters within the mediums of film and performance. The history of performing the virtual normative body in film and theater may begin with the sotto voce of an invisible actor, Rosaline, in film versions of Romeo and Juliet, continue through The Wizard of Oz, and culminate in Samantha, the non-normative, non-embodied character in Spike Jonze's Her. As Virilio's "stereo" perspective becomes normative, this paper focuses on how performing the normative body virtually redefines the roles of actors, directors, and audiences.

Speaker
Biography:

Kenneth Ritter is a research assistant and graduate student at the University of Louisiana (UL) at Lafayette, Louisiana, working on a PhD in Systems Engineering with an expected graduation date of December 2016. He obtained a Master of Science in Solar Energy Engineering from Högskolan Dalarna in Borlange, Sweden. At UL, Ritter has directed the creation of the Virtual Energy Center, an educational game using a scale CAD model of the Cleco Alternative Energy Research Center in Crowley, Louisiana. He has experience with AutoCAD, SolidWorks, Unity3D, and programming in C# and JavaScript. Currently, Ritter is working to develop an immersive, networked, collaborative virtual reality environment for education about alternative energy technologies.

Abstract:

As interest in Virtual Reality (VR) increases, so does the number of software toolkits available for various VR applications. Given that more games are made with the Unity game engine than any other game technology, several of these toolkits are designed to be imported directly into Unity. Unity developers need a feature and interaction comparison of these toolkits to select the one best suited to a specific application. This paper presents an overview and comparison of several virtual reality toolkits available for developers using the Unity game engine. To compare VR interaction, a scene was created in Unity and tested in the three-sided Cave Automatic Virtual Environment (CAVE) at the Rougeou VR Lab. In the testbed scene, the user must disassemble the major components of the Electrotherm Green Machine at the Virtual Energy Center. The three toolkits that met the criteria for this comparison are getReal3D, MiddleVR, and the Reality-based User Interface System (RUIS). Each of these toolkits can be imported into a Unity scene to bring VR interaction and display to multi-projection immersive environments like CAVEs. This paper also provides how-to guides that help users install and use these toolkits to add VR capability to their Unity games. A comparative analysis is given of performance, flexibility, and ease of use for each toolkit regarding VR interaction and CAVE display. MiddleVR was found to be the highest-performing and most versatile toolkit for CAVE display and interaction; however, for some display applications, such as CAVE2, the getReal3D toolkit may be better suited. Regarding cost, RUIS is the clear winner, as it is available for free under version 3 of the GNU Lesser General Public License (LGPL).

Speaker
Biography:

Andres Montenegro is the coordinator of the Modeling and Animation Concentration Area in the Department of Visual Communication and Design, College of Visual and Performing Arts, at Indiana University-Purdue University Fort Wayne, Indiana. His work develops immersive environments using real-time 3D animations while integrating physical computing in installations based on interactive responses and multichannel projections. He has extensive experience with software and hardware oriented toward the generation of different styles of rendered images. Painting is his main source of inspiration and subject of research. He received his BFA in Art and Education from the University of Chile in 1986, his MA from the University of Playa Ancha, Chile, in 1996, and his MFA in Digital Arts from the University of Oregon in 2006, where he was awarded the Clarice Krieg Scholarship and University of Oregon Scholarships in 2004, 2005, and 2006.

Abstract:

This presentation will articulate the conceptual and practical implementation of an interactive system based on animations and 3D models, utilizing Augmented Reality quick response (QR) markers to display graphics. The proposed model will also open a discussion about how to display dynamic navigation within an artificial setting or environment created through AR. Augmented Reality in the world of computer graphics is simply defined as the action of superimposing, via software, an artificial (computer-generated) construction over the real-world surface. This visualization occurs when the camera of a mobile device such as an iPhone or iPad, or another holographic optics-based gadget, perceives and exposes graphics and images linked to a marker attached to a real-world object. Today the potential of interactive animations and images combined with text makes content development in Augmented Reality a very promising venue for an artistic narrative based on multiple responses, which the viewer is able to organize and manipulate. The same conceptual and practical model can be implemented for Virtual Reality immersive environments. This presentation will include several examples developed by my students, as well as my personal projects. The audience will appreciate the use of tactile gestures, body movements (through accelerometers), and other sensing capabilities provided by mobile devices (based on Android or iOS). The ultimate goal of the presentation is to feature a compelling narrative based on an experiential, phenomenological approach, achieved through the manipulation of animations, images, 3D models, and virtual environments.

Biography:

A traditional and CG modeler and animator with a BA in Visual Communications from SAIC with a focus on Animation, he is creating virtual worlds via Wholebitmedia. He has made the crossover from animation to game programming, using traditional concepts in 3D space for interactivity. He is deeply influenced by geek culture through manga and science fiction, including William Gibson as well as Masamune Shirow, and by electronica and rave culture. He is highly active in the Houston community as a member of various social groups such as the Houston Unity Meetup, Animate Houston, Girl Develop It, and VR Houston, and is working on VR worlds out of an interest in human interaction with metaphysics, an interest he has incorporated into his thought structures for artificial intelligences and gaming algorithms.

Abstract:

As the networks we use overtake the significance of our actual physical data, virtual worlds become a more accurate representation of our surroundings. Virtual worlds and metaphysics go hand in hand in a matrix of accidental artificial intelligence, in the form of the statistical data collected about our activity. When we wonder about the possibility of the virtual versus the real, we must consider our need for a metaphysical connection with the technology that goes beyond the data. Our connection to the real world is lost when we fail to realize the potential of the networks we use. During this talk the speaker will engage the audience to search for or explain different types of metaphysical experiences and re-evaluate them as part of a paradigm for artistic endeavor. As we move into an age where computers are no longer limited in computing power, so too will artists be free to be unlimited in their creativity. Metaphorically speaking, we currently live in a dark age when it comes to communication between individuals, not only on but also off the grid. Though all of us are connected through technology, the use of these tools at a higher level of communication remains in its infancy. Metaphysics is thought of as something illusory or spiritual, but most people can claim to have experienced a moment in time when time itself appeared as something intangible. As humanity moves into this new cyber-terrain, where physics takes on less significance than metaphysics, we lose sight of the potential of humanity behind the benchmark of computational ability. It has become more important to emphasize and create new languages for computers and scientists to use than to create a new set of standards by which humans can be measured. The speaker plans to discuss the current state of computing as it relates to the engagement of the viewer with relevant relativistic content.

  • Visualization

Session Introduction

Howard Kaplan

University of South Florida, USA

Title: Tactile Visualization
Speaker
Biography:

Howard Kaplan is the head of the Advanced Visualization Center at the University of South Florida in Tampa. He uses multiple aspects of visualization as a means of study and application; many of his visualization applications revolve around real-world data, 3D graphics and simulation, and 2D interactive media. He received a BFA from Ringling College of Art and an M.Ed from the University of South Florida, and is currently pursuing a PhD in Engineering Science, Biomedical and Chemical Engineering. His work has been featured in the journal Science, Wired.com, ACM SIGGRAPH, and Discovery.com. He was also selected by the Center for Digital Education as one of its Top 30 Technologists, Transformers and Trailblazers in 2014.

Abstract:

Most classrooms utilize generic two-dimensional representations in the form of scientific illustrations. In this talk we discuss various academic practices that have been used to enhance learning using 3D printing and digital modeling technologies. Topics will explore the path from data to digital models and finally to physical objects. Multiple 3D software applications will be used to demonstrate the process of modeling, encoding, preparing, and printing digital models. This presentation will give attendees an expanded view of interdisciplinary approaches to developing 3D-print-ready models with added information in the form of tactile visualizations. In this way students can feel the object and get some sense of the concept upon which the data is based. Additionally, this allows customizing and individualizing of educational material. By providing a physical and tactile representation, as well as the opportunity to take part in the process of creating tactile visualizations, we believe we can more effectively and efficiently aid the development of mental images and the transfer of prior knowledge to new contexts, as well as contribute positively to shared and authentic collaborative learning experiences. As an example, one particular area of interest is using 3D printing technology as an educational tool for blind and visually impaired learners.
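
As a hedged illustration of the "data to print-ready model" step (a minimal sketch, not the Center's actual toolchain), the following encodes a small 2D data grid as a tactile relief and writes an ASCII STL of its top surface:

```python
def write_heightmap_stl(data, path, cell=1.0, z_scale=5.0, base=1.0):
    """Encode a 2D data grid as a tactile relief: each value becomes a
    height, triangulated into an ASCII STL top surface. (A watertight,
    print-ready model also needs side walls and a bottom face, which
    are omitted here for brevity; slicers recompute true normals.)"""
    rows, cols = len(data), len(data[0])
    lo = min(min(r) for r in data)
    hi = max(max(r) for r in data)
    span = (hi - lo) or 1.0

    def z(i, j):  # normalized data value raised above a solid base
        return base + z_scale * (data[i][j] - lo) / span

    with open(path, "w") as f:
        f.write("solid tactile\n")
        for i in range(rows - 1):
            for j in range(cols - 1):
                a = (j * cell, i * cell, z(i, j))
                b = ((j + 1) * cell, i * cell, z(i, j + 1))
                c = (j * cell, (i + 1) * cell, z(i + 1, j))
                d = ((j + 1) * cell, (i + 1) * cell, z(i + 1, j + 1))
                for tri in ((a, b, c), (b, d, c)):  # two triangles per cell
                    f.write("facet normal 0 0 1\n outer loop\n")
                    for vx, vy, vz in tri:
                        f.write(f"  vertex {vx} {vy} {vz}\n")
                    f.write(" endloop\nendfacet\n")
        f.write("endsolid tactile\n")

# Example: a tiny synthetic data set becomes a touchable surface.
write_heightmap_stl([[0, 1, 2], [1, 3, 1], [2, 1, 0]], "tactile.stl")
```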

Speaker
Biography:

Rebecca Ruige Xu teaches computer art and animation as an Associate Professor in the College of Visual and Performing Arts at Syracuse University. Her artwork and research interests include experimental animation, visual music, artistic data visualization, interactive installations, digital performance, and virtual reality. Her recent work has appeared at the Ars Electronica Animation Festival; SIGGRAPH Art Gallery; Museum of Contemporary Art, Italy; Aesthetica Short Film Festival, UK; CYNETart, Germany; International Digital Art Exhibition, China; Los Angeles Center for Digital Art; and the Boston Cyberarts Festival. She has also been a research fellow at the Transactional Records Access Clearinghouse, Syracuse University, since 2011.

Abstract:

In recent years we have seen increasing interest in data visualization in the artistic community. Many data-oriented artworks use sophisticated visualization techniques to express a point of view or persuasive goal, and the attitude that visualizations can be used to persuade as well as analyze has been embraced by more people in the information visualization community. This talk shares my experience and reflections on creating data visualization as artwork via case studies of two recent projects. It presents a workflow spanning conceptual development, data analysis, algorithm development, procedural modeling, and final image production. It hopes to offer insight into the artist's effort to balance persuasive goals against analytic tasks. Furthermore, it raises questions about the role artistic data visualization plays in helping people comprehend data, and the influence such artistic exploration might have in shifting public opinion.
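
Purely as an illustration of the algorithm-development and procedural-modeling steps (not code from the talk; the data series and petal mapping are invented), this sketch maps each record directly to procedural geometry:

```python
import math
import matplotlib.pyplot as plt

# Map each record to form: value -> petal length, index -> base angle,
# so the data itself drives the procedural shape.
values = [4.0, 7.5, 3.2, 9.1, 5.6, 6.3, 2.8, 8.4]  # stand-in data

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
for i, v in enumerate(values):
    theta0 = 2 * math.pi * i / len(values)
    # Parametric petal: radius swells out and back while the angle
    # sweeps asymmetrically, closing the loop into a petal shape.
    thetas = [theta0 + 0.35 * math.sin(2 * math.pi * t / 20) for t in range(21)]
    radii = [v * math.sin(math.pi * t / 20) for t in range(21)]
    ax.plot(thetas, radii, alpha=0.7)
ax.set_axis_off()
plt.savefig("glyph.png", dpi=300)
```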

Speaker
Biography:

Robert S. Laramee received a bachelor's degree in physics, cum laude, from the University of Massachusetts, Amherst (ZooMass) in 1997. In 2000, he received a master's degree in computer science from the University of New Hampshire, Durham. He was awarded a PhD from the Vienna University of Technology (Gruess Gott TUWien), Austria, at the Institute of Computer Graphics and Algorithms in 2005. From 2001 to 2006 he was a researcher at the VRVis Research Center (www.vrvis.at) and a software engineer at AVL (www.avl.com) in the department of Advanced Simulation Technologies. Currently he is an Associate Professor at Swansea University (Prifysgol Cymru Abertawe), Wales, in the Department of Computer Science (Adran Gwyddor Cyfrifiadur). His research interests are in the areas of big data visualization, visual analytics, and human-computer interaction. He has published over 100 peer-reviewed papers in scientific conferences and journals and served as Conference Co-Chair of EuroVis 2014, the premiere conference on data visualization in Europe. His work has been cited over 2,400 times according to Google Scholar, and his research videos have been viewed over 6,000 times on YouTube.

Abstract:

With the advancement of simulation and data storage technologies and the ever-decreasing cost of hardware, our ability to derive and store data is unprecedented. However, a large gap remains between our ability to generate and store large collections of complex, time-dependent simulation data and our ability to derive useful knowledge from them. Visualization exploits our most powerful sense, vision, to derive knowledge and gain insight into large, multivariate flow simulation data sets that describe complicated and often time-dependent events. This talk presents a selection of state-of-the-art flow visualization techniques and applications in the areas of computational fluid dynamics (CFD) and foam simulation, showcasing some of visualization's strengths, weaknesses, and goals. We describe interdisciplinary projects based on flow and foam motion, where visualization is used to address fundamental questions, the answers to which we hope to discover in various large, complex, and time-dependent phenomena.
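
As a minimal example of one classic flow visualization technique in this area (an illustrative sketch, not code from the talk; the vector field is synthetic), the following seeds streamlines over a 2D velocity field and colors them by velocity magnitude:

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic 2D velocity field (a counter-rotating vortex pair)
# standing in for real CFD output.
y, x = np.mgrid[-2:2:200j, -2:2:200j]
r1 = x**2 + (y - 0.8)**2 + 0.05   # softened squared distances to the cores
r2 = x**2 + (y + 0.8)**2 + 0.05
u = -(y - 0.8) / r1 + (y + 0.8) / r2
v = x / r1 - x / r2

speed = np.hypot(u, v)
fig, ax = plt.subplots(figsize=(6, 6))
# Streamlines seeded on a uniform grid, colored by velocity magnitude.
strm = ax.streamplot(x, y, u, v, color=speed, cmap="viridis", density=1.4)
fig.colorbar(strm.lines, label="speed")
ax.set_title("Streamlines of a synthetic vortex pair")
plt.savefig("flow.png", dpi=200)
```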

  • Rendering

Session Introduction

Scott Swearingen and Kyoung Lee Swearingen

University of Texas at Dallas, USA

Title: Pushing the Physical Arts Deeper into Real-Time Rendering
Speaker
Biography:

Scott Swearingen is an artist, developer, and educator who creates interactive multimedia spaces that blur the boundaries between the virtual and the practical. He has been working at the intersection of art and technology for nearly 20 years, specializing in digital imaging, kinetic sculpture, video games, and virtual environments. His work has been widely published and has garnered recognition from the Academy of Interactive Arts and Sciences as well as the Game Developers Choice Awards. He has collaborated on several award-winning franchises, including Medal of Honor, The Simpsons, Dead Space, and The Sims. Kyoung Lee Swearingen has worked in the film industry for the last decade on a variety of features and shorts, including Ratatouille, WALL-E, Up, Cars 2, Toy Story 3, Brave, Monsters University, Presto, La Luna, The Blue Umbrella, Mater's Tall Tales, Partly Cloudy, The Ant Bully, and the Jimmy Neutron TV series. As a Technical Director of Lighting at Pixar Animation Studios, Kyoung focused on visual storytelling, mood, and look development through lighting. Her work has earned recognition from the Academy Awards, BAFTA, the Visual Effects Society, the American Film Institute, and many others.

Abstract:

The primary motivation behind our research is to push the physical arts deeper into the CG pipeline for rendering virtual environments. Using photogrammetry and 3D printing technologies, our process enables sculptors and painters to see their physical artworks move beyond the constraints of preproduction. Deviating from the traditional video game production pipeline, we print our low-resolution collision models as physical objects that are then sculpted upon and later scanned for reintegration. We will also discuss calibration methods that strengthen our ability to iterate quickly, as well as ways of maximizing texture resolution in order to maintain the integrity of the original artwork. By interjecting new technologies into established production models, we have created a unique pipeline for studios and new opportunities for artists.

Tom Bremer

The DAVE School, Visual Cue Studios LLC

Title: Render Passes: Taking control in CG Compositing
Speaker
Biography:

Tom Bremer started his artistic career more than 10 years ago as a hobby and quickly realized his potential. After moving to Los Angeles in 2007, he worked with many studios, including Rhythm and Hues, Disney, Pixomondo, and Zoic Studios, where his work on CSI: Crime Scene Investigation won a primetime Emmy Award for outstanding visual effects. He has also won multiple Telly Awards for his work throughout the years. His credits include The Hunger Games, Disney's Planes, Terra Nova, and Grimm. Tom is currently the Production Instructor at The Digital Animation & Visual Effects School in Orlando, Florida.

Abstract:

While it used to be behind-the-scenes movie magic, the average person on the street now knows that integrating a VFX element or green-screen footage of an actor requires a certain amount of compositing in a 2D compositing package such as The Foundry's Nuke or Adobe's After Effects. What many people don't know is that fully CG-rendered films require just as much, if not more, compositing of elements. My lecture will introduce the audience to what render passes are, the benefits and drawbacks of compositing CG assets using passes, and some basic techniques, as well as some more advanced techniques that even artists working in the industry might not realize are possible. I would also like to show a recent animated short I wrote and directed using the same techniques I will be speaking about, along with a breakdown of some of the animated shots. This lecture will have something for everyone and will help demystify the art of render passes and compositing.
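
As a simple illustration of the basic idea (pass names and file names are hypothetical, and real AOV sets vary by renderer; production work happens in packages like Nuke or After Effects), render passes recombine additively, which lets a compositor grade one component without re-rendering:

```python
import numpy as np

# Load hypothetical float32 HxWx3 pass layers exported from a renderer.
passes = {name: np.load(f"{name}.npy")
          for name in ("diffuse", "specular", "reflection", "emission")}

# Grade a single component without re-rendering: warm up the diffuse.
graded_diffuse = passes["diffuse"] * np.array([1.10, 1.00, 0.92])

# Additive recombination: beauty ~= diffuse + specular + reflection + emission.
beauty = (graded_diffuse + passes["specular"]
          + passes["reflection"] + passes["emission"])

# Simple tonemap and gamma for review, quantized to 8 bits.
display = np.clip(beauty / (beauty + 1.0), 0.0, 1.0) ** (1.0 / 2.2)
np.save("beauty_graded.npy", (display * 255).astype(np.uint8))
```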