Day :
- Animation
Session Introduction
Tom Sito
The University of Southern California, USA
Title: Computer Animation at the Half-Century: How Did We Get Here?
Biography:
Tom Sito has been a professional animator since 1975. One of the key players in Disney’s animation revival in the 1990s, he animated on such classic films as The Little Mermaid (1989), Beauty and the Beast (1991), and The Lion King (1994). He is Chair of the John C. Hench Division of Animation and Digital Arts at the School of Cinematic Arts at the University of Southern California, and President Emeritus of the Animation Guild, Local 839, Hollywood. He is the author of several books, including Drawing the Line: The Untold Story of the Animation Unions from Bosko to Bart Simpson (University Press of Kentucky, 2006) and Moving Innovation: A History of Computer Animation (MIT Press, 2013).
Abstract:
Fifty years ago a graduate student at MIT completed his thesis project by creating the first-ever animation program on a declassified Cold War computer used to track Soviet nukes. In the intervening years Computer Graphics (or CG) has forever changed the way we experience media. Without CG the Titanic would not sink. The armies of Middle Earth could not march. We would never know Shrek, Lara Croft, Buzz Lightyear or the Na’vi. It has made movie film itself an anachronism. Yet few today understand its origins. Ask seven professionals what the first computer graphics in a major motion picture was, and you will probably get seven different answers. There is more to the history of CG than one day George Lucas rubbed a lamp and Pixar popped out. Tom Sito, author of the first-ever complete history of CG, describes how an unlikely cast of characters (math nerds, experimental artists, beatniks, test pilots, hippies, video gamers and entrepreneurs) shared a common dream: to create art with a computer, heretofore considered only a machine for calculations. Together they created something no one asked for, and no one knew they wanted, and they used it to change all the world’s media.
Sean McComber, Eric Farrar, Todd Fechter, and Kyoung Lee Swearingen
University of Texas at Dallas, USA
Title: Building an Animation Production Course for University Animation Students
Biography:
Sean McComber is an Assistant Professor of Animation in Arts and Technology (ATEC) at the University of Texas at Dallas. He graduated from Savannah College of Art and Design with a B.F.A. in Computer Art and an emphasis in Animation and received his M.F.A. in ATEC from UTD. After graduating, Sean was accepted into the internship program at Rhythm & Hues Studios, a visual effects production company for film. Sean rose from intern to Lead Animator and eventually traveled to Rhythm & Hues’ Mumbai, India, facility as Supervising Animator. Sean is currently teaching classes in Character Animation.
Eric Farrar is an Assistant Professor of 3D Computer Animation in Arts and Technology (ATEC). He graduated from The Ohio State University, where he completed an MFA in Computer Animation and Visualization working through the Advanced Computing Center for Art and Design (ACCAD). Eric then went to work for the Los Angeles-based visual-effects studio Rhythm & Hues, where he worked as a character rigger creating bone and muscle systems for digital characters for films such as Night at the Museum and The Chronicles of Narnia: The Lion, the Witch and the Wardrobe. Eric is currently teaching classes in 3D animation, including courses specifically focused on the more technical side of character rigging.
Todd Fechter is an Associate Professor of Animation and current Interim Director of the School of Arts, Technology and Emerging Communication. He graduated with an MFA in Computer Animation and Visualization from The Ohio State University in 2002. Fechter has worked in and around the animation industry for the past thirteen years as a modeler, rigger, and modeling supervisor for studios including DNA Productions and Reel FX. He currently teaches courses in modeling and pre-production.
Abstract:
Preparing students for careers in the animation industry can be a challenge. Over the past three years we have developed an Animation Production Studio course in which we strive to mimic a studio production environment. In this course students have the opportunity to drive the entire production pipeline, including story development, layout, modeling, texturing, rigging, animation, lighting, rendering/compositing, and sound design, as well as project planning and management. Students work in a collaborative environment and develop skills with specific production tasks, in addition to gaining critical experience in working as part of a large, multi-disciplinary team with definite production goals and deadlines. The problem solving and time management skills developed in this course help prepare our students not only for the film and game industries, but also for the myriad new and emerging areas of animation and visualization. This lecture will discuss the structure of the course, what has and has not worked over the past three years, and how the evolution of this course has helped to prepare students for work after college, drive the growth and direction of the ATEC animation program, and create several award-winning short films.
Abdennour El Rhalibi
Liverpool John Moores University, UK
Title: Coarticulation and Speech Synchronization in MPEG-4 Based Facial Animation
Biography:
Abdennour El Rhalibi is Professor of Entertainment Computing and Head of Strategic Projects at Liverpool John Moores University. He is Head of the Computer Games Research Lab at the Protect Research Centre. He has over 22 years’ experience in research and teaching in Computer Science. Abdennour has worked as lead researcher on three EU projects in France and in the UK. His current research involves game technologies and applied artificial intelligence. For six years Abdennour has been leading several projects in Entertainment Computing funded by the BBC and UK-based games companies, involving cross-platform development tools for games, 3D web-based game middleware development, state synchronisation in multiplayer online games, peer-to-peer MMOGs and 3D character animation. Abdennour has published over 150 publications in these areas. He serves on many journal editorial boards, including ACM Computers in Entertainment and the International Journal of Computer Games Technology. He has served as chair and IPC member at over 100 conferences on computer entertainment, AI and VR. Abdennour is a member of many international research committees in AI and Entertainment Computing, including the IEEE MMTC IG on 3D Rendering, Processing and Communications (3DRPCIG), the IEEE Task Force on Computational Intelligence in Video Games and IFIP WG 14.4 Games and Entertainment Computing.
Abstract:
In this talk, Prof. Abdennour El Rhalibi will present an overview of his research in game technologies at LJMU. He will present some recent projects developed with BBC R&D on game middleware development and facial animation. In particular he will introduce a novel framework for coarticulation and speech synchronization in MPEG-4 based facial animation. The system, known as Charisma, enables the creation, editing and playback of high-resolution 3D models and MPEG-4 animation streams, and is compatible with well-known related systems such as Greta and Xface. It supports text-to-speech for dynamic speech synchronization. The framework also enables real-time model simplification using quadric-based surfaces. The coarticulation approach provides realistic and high-performance lip-sync animation, based on Cohen-Massaro's model of coarticulation adapted to the MPEG-4 facial animation (FA) specification. He will also discuss experiments which show that the coarticulation technique gives overall good results when compared to related state-of-the-art techniques.
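As a rough illustration of the blending mechanism at the core of the Cohen-Massaro model that Charisma adapts, the sketch below mixes per-segment viseme targets for a single mouth parameter using exponential dominance functions. All timings, targets, and dominance parameters are invented for illustration; the actual system drives full MPEG-4 FAP streams rather than one parameter.

```python
# Minimal sketch of Cohen-Massaro dominance blending (illustrative values only).
import numpy as np

# (segment center time in s, target value for one mouth parameter, alpha, theta)
segments = [(0.10, 0.8, 1.0, 10.0),   # e.g. a viseme for /a/ (assumed values)
            (0.30, 0.1, 0.9, 14.0),   # e.g. a viseme for /m/
            (0.50, 0.6, 1.0, 12.0)]   # e.g. a viseme for /o/

def dominance(t, center, alpha, theta, c=1.0):
    """Exponentially decaying dominance of a segment at time t."""
    return alpha * np.exp(-theta * np.abs(t - center) ** c)

t = np.linspace(0.0, 0.6, 61)
num = sum(dominance(t, tc, a, th) * target for tc, target, a, th in segments)
den = sum(dominance(t, tc, a, th) for tc, target, a, th in segments)
curve = num / den   # smooth, coarticulated trajectory instead of hard switches
```

Because each segment's dominance overlaps its neighbours, the resulting trajectory anticipates upcoming visemes and carries over previous ones, which is precisely the coarticulation effect that hard per-phoneme switching misses.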
Lauren Carr
Montclair State University, USA
Biography:
Lauren Carr joins the Department of Art and Design as an Assistant Professor in the Animation/Illustration program. She has worked professionally for Disney Feature Animation, Cinesite, Sony Pictures Imageworks, and DreamWorks Animation. Some of her film projects include Tangled, Meet the Robinsons, Chicken Little, X-Men United, Rio, and Ice Age 4. Prof. Carr was a character simulation technical director at Blue Sky Studios and, prior to coming to Montclair State University, taught at the School of Visual Arts in the Department of Computer Art, Computer Animation & Visual Effects.
Abstract:
“3D Animation” is commonly associated with animating characters; yet this arguably young medium can create aesthetic expression far beyond characters animated to tell a story. Exploiting 3D animation software by combining its tools with traditional art forms and media for experimental art is seldom considered, despite its powerful potential. At the intersection of fine art and 3D animation, students can uncover new creative techniques and approaches to problem-solving. There is powerful potential for discovery in the art academy by connecting these two learning paths, with likely innovative curriculum solutions resulting in communal learning and discovery amongst students and professors. This session seeks to explore an interdisciplinary approach combining 3D animation software with traditional art media. This conference talk explores the theorization and implementation of methods that combine fine art and 3D animation studies. The presentation draws on analyses of the presenter’s own practice of combining traditional media with 3D animation software.
David M Breaux
Sr Character / Creature Animator & Animation Instructor, USA
Title: Facial Animation - Phonemes Be Gone…
Biography:
I have completed work on more than 30 projects and counting during my time in the industry. I have a combined 16+ years of experience in film, games, commercials and television. I specialize in highly detailed character and creature performance animation, using both key-framed and motion-captured data, or a hybrid of the two where appropriate. My professional film experience includes a diverse list of animal, character and creature types encompassing the fantastic, the realistic and the cartoony. My most recent released projects were for Blur Studios on the Tom Clancy’s The Division pre-release trailer and the Halo: The Master Chief Collection cinematic remastering.
Abstract:
Any serious animator worth their weight in frames has seen Preston Blair’s mouth expressions or heard of using phonemes for animating a character’s lip-sync. In its day this was quite an effective way for animators to break down dialogue into something manageable. The problem is that hand-drawn animation has never needed to recreate perfectly believable lip-sync; after all, the starting point of traditional hand-drawn animation is already several steps away from realism. This thinking, however, has carried over into CG animation in a couple of ways. Often character rigs will have predefined mouth shapes for a character which, rightly so, can be art-directed, which is often a desired trait, especially if there is a large animation team or a specific thing a character is known for. However, these confine you to that shape, and they create more work for riggers and modelers. Animators also lose a bit of control by the nature of this system. This system is also used often in games to automate facial animation, since games often have far more dialogue to address than most feature films. However, it produces overly chattery results, hurting the visuals and even kicking the player out of their suspension of disbelief. I am proposing a different method, now that CG offers us the ability, for better or worse, to infinitely tweak our animation to achieve the most subtle of motion. This is a technique I have developed over my 16+ years animating characters and creatures who needed to speak dialogue, and it involves a deeper understanding of how humans speak, what our mouths are muscularly capable of doing, and how we perceive what someone is saying in a visual sense. It also takes some burden off the modelers and riggers, and simplifies controls for animators while increasing the control it affords them. I didn’t invent this, nature did; I have just refined how I think about it and distilled it down into a description that I have never heard explained this way. My students are very receptive to this approach and often find it takes the mystery out of effective lip-sync, making it easier and faster to produce than they thought. Performance and lip-sync are my favorite things to work on as an animator.
Benjamin J Rosales
Terra State Community College, USA
Title: Virtual Instruction: a New Approach to Educating the 3D Artist
Biography:
Ben is a graduate of Ringling College of Art + Design’s renowned Computer Animation program as well as Texas A&M University’s College of Architecture. Ben also spent a year at Carnegie Mellon’s Entertainment Technology Center. Ben moved his family to Iowa in 2011 to help create the computer animation program at Southeastern Community College in West Burlington, IA. While there, he guided two animation teams in the production of their award-winning shorts at the national Business Professionals of America animation competition last year. Ben shares with students the knowledge and skills he continually gains from his own experiences in the animation industry. Prior to teaching, Ben worked as a character animator at Reel FX in Dallas, TX on Sony’s “Open Season 3”. While at Reel FX, Ben also did clean-up work on Open Season 3, Looney Shorts, Webosaurs, and DC Universe, as well as managing the render farm at night.
Abstract:
This presentation will address the incorporation of new methods, technologies, and tools for a more accessible and streamlined system to train the next generation of 3D artists. It will compare and contrast traditional tools and methods with new and emerging ones as well as highlight the pros and cons of each. It will also demonstrate why these changes are not only necessary, but will become mandatory in the future. Virtual Instruction can be defined simply as instruction given through a live online video feed without the instructor being physically present, or in some cases, without the student being physically present. While Virtual Instruction is not new to education, there are new concepts being introduced to make Virtual Instruction even more accessible, more affordable, and of an even higher quality. The proposed Virtual Instruction model will open a discussion about the challenges of companies hiring well-trained employees with less student loan baggage, the challenges of schools attracting qualified industry professionals to teach animation courses at their campuses, and the challenges of students striking a balance between quality and affordability in animation programs. These challenges make for a very promising environment to implement the next phase of Virtual Instruction. The idea of implementing the Virtual Instruction model across time-zones will also be discussed. This presentation will have several examples of instructional tools developed by the presenter, including personal and student projects. These examples will give compelling evidence of the effectiveness of the Virtual Instruction model, which is the goal of the presentation.
Russell Pensyl
Northeastern University, USA
Title: Facial Recognition and Emotion Detection in Environmental Installation and Social Media Applications
Biography:
Russell Pensyl (MFA 88, BFA 85) is an American media artist and designer. His work maintains a strategic focus on communication, narrative, and user-centric design processes for interactive and communication media. Pensyl is currently full Professor in the Department of Art+Design at Northeastern University, where he held the post of Chair from 2010 till 2012. Previous posts include Director of Research and Graduate Studies at Alberta College of Art + Design, Director of the Interaction and Entertainment Research Center and Executive Vice Dean of the School of Art, Design and Media at Nanyang Technological University in Singapore, and Chair of the Department of Digital Art and Design at Peking University. Pensyl’s current work includes the creation of location-based entertainment and several areas of technology for content delivery in environmental spaces, including facial recognition, positioning and localization, and gesture recognition. Recently, research in the use of facial recognition technology, positioning and augmented reality annotation has resulted in commercially viable communication technologies as well as user-centric, autonomously responsive systems using biometric data in interactive installations. Work from 2010 explores “subtle presence”: autonomously responsive media in an interactive installation that presents a dynamic time-lapse still-life painting that shifts subtly in response to sensed personal characteristics of viewers in the exhibition space. In 2011, this installation was featured in the International Sarajevo Winter Festival. In 2008 Pensyl’s mixed-reality installation “The Long Bar” was a curator-invited installation in the SIGGRAPH Asia Synthesis Curated Show/Art Gallery in Singapore. His exhibition credits include international exhibitions in China, the USA, Japan, and Europe.
Abstract:
Facial recognition technology is a growing area of interest; researchers are using these new applications for studies in psychology, marketing, product testing and other areas. There are also applications where facial image capture and analysis can be used to create new methods for the control, mediation and integration of personalized information into web-based, mobile-app and standalone systems for media content interaction. Our work explores the application of facial recognition with emotion detection to create experiences within these domains. For mobile media applications, personalized experiences can be layered onto personal communication. Our current software implementation can detect smiles, sadness, frowns, disgust, confusion, and anger. In a mobile media environment, content on a device can be altered to create a fun, interactive experience which is personally responsive and intelligent. Through direct communication between peer-to-peer mobile apps, moods can be instantly conveyed to friends and family, when desired by the individual. This creates a more personalized social media experience. Connections can be created with varying levels of intimacy, from family members, to close friends, out to acquaintances and further to broader groups as well. The technique currently uses pattern recognition to identify shapes within an image field using the Viola-Jones OpenCV Haar-like features approach [1], [2], [3], a FERET database [4] of facial images and a support vector machine (LibSVM) [3] to classify the captured camera view field and determine whether a face exists. The system processes the detected faces using an elastic bunch graph mapping technique that is trained to determine facial expressions. These facial expressions are graphed on a sliding scale by their distance from a target emotion graph, thus giving an approximate determination of the user’s mood.
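For readers unfamiliar with the detection front end, here is a minimal sketch of Viola-Jones Haar-cascade face detection in OpenCV, with the expression-classification stage left as a stub; the file paths and feature step are placeholders, and the authors' actual pipeline classifies with LibSVM and elastic bunch graph mapping rather than the raw-pixel features shown.

```python
# Minimal Viola-Jones face detection sketch (OpenCV); classifier stage stubbed.
import cv2
import numpy as np

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

frame = cv2.imread("frame.jpg")   # placeholder: one frame of the camera feed
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Detect candidate face rectangles with the Haar cascade.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    face = cv2.resize(gray[y:y + h, x:x + w], (64, 64))
    features = face.flatten().astype(np.float32) / 255.0  # placeholder features
    # A trained SVM (hypothetical model file) would map features to an emotion:
    # svm = cv2.ml.SVM_load("expressions.yml")
    # _, label = svm.predict(features.reshape(1, -1))
    print("face at", (x, y, w, h))
```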
Jennifer Coleman Dowling
Framingham State University, USA
Title: Exploring Innovative Technology: 2D Image Based Animation with the iPad
Biography:
Jennifer Coleman Dowling is an experienced new media specialist, designer, educator, author, and artist. She holds an M.F.A. in Visual Design from the University of Massachusetts Dartmouth and a B.A. in Studio Art from the University of New Hampshire. Dowling is a Professor in the Communication Arts Department at Framingham State University in Massachusetts, focusing on Integrated Digital Media. She has been dedicated to her teaching and professional work for over 25 years, and is the author of “Multimedia Demystified,” published by McGraw-Hill. Her current line of research and practice focuses on analog-digital approaches to media, fine art, and design.
Abstract:
Teaching computer animation techniques using innovative approaches was made possible for me by two consecutive technology grants. iPads were procured to support inventive ways of learning digital animation and time-based media for artistic and commercial purposes. The technology assisted students in developing new visualization and production methods while concurrently providing theoretical and practical instruction in fundamental animation techniques. This approach facilitated a more imaginative process for solving problems, discovering inspiration, creating concepts, and exchanging ideas, so students could more fully develop their knowledge of the subject while building more versatile computer animation capabilities. Other advantages included the portability, accessibility, flexibility, and immediacy of using a mobile device as the primary course tool. Students used the iPad to sketch ideas, brainstorm, plan narrative and storytelling structures, conduct research, and present their work. They also had ongoing opportunities to collaborate with one another on group projects, exchange ideas, discuss work, and give and receive feedback. Complementary tactics with iPads included: studying historical and contemporary figures in the animation field; sketching characters, scenes, and storyboards; manipulating timeline keyframes and stage elements, and adjusting camera views; digitizing and editing audio tracks; and capturing and manipulating photography and video. Assignments focused on such subjects as kinetic typography, logo animation, introductory sequences for video and film, web-based advertisements, cartoon and character animation, animated flipbooks, and stop-motion techniques. This presentation will cover the goals and outcomes of this research, including student survey results, assessments, and animation examples.
David Xu
Regent University, USA
Biography:
Professor David Xu is a tenured Associate Professor at Regent University, specializing in computer 3D animation and movie special effects. He received an MFA in Computer Graphics (3D Animation) from Pratt Institute in New York. He has served as a senior 3D animator at Sega, Japan; a senior CG special-effects artist at Pacific Digital Image Inc., Hollywood; and as a professor of animation at several colleges and universities, where he developed 3D animation programs and curricula. He has been a committee member of the computer graphics organization SIGGRAPH, where he was recognized with an award for his work. He published the book Mastering Maya: The Special Effects Handbook at the invitation of Shanghai People's Fine Arts Publishing House.
Abstract:
In this talk, Professor Xu will present an overview of Maya special effects used in post-production. He will showcase some Maya special effects used in films, and share his thoughts on the roles of Maya special effects in movies and commercials. In particular, he will go in depth into the explosion effect and the splash effect which he created for his published textbook, exploring the conceptualization, production process and effective solutions for these animation projects. He will also demonstrate various Maya special-effects techniques: for example, how to create a bomb using the particle instancer; how to create explosion and fire effects by applying the Dynamic Relationships Editor and Particle Collision Event Editor, gravity and radial fields; how to create an ocean surface by applying soft bodies; and how to create ocean splash effects by applying rigid bodies, the particle system, the Particle Collision Event Editor and gravity.
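To give a flavor of the setups the talk walks through, the sketch below builds a toy splash-like rig with Maya's legacy particle commands: an emitter, a gravity field, a collision surface, and a collision event. It is a minimal sketch assuming the maya.cmds Python module (it runs only inside Maya's script editor), not a reproduction of the textbook's actual scenes.

```python
# Minimal Maya dynamics sketch (legacy particles); runs inside Maya only.
import maya.cmds as cmds

emitter = cmds.emitter(pos=(0, 0, 0), type='omni', rate=200)[0]
particles = cmds.particle(name='splash')[0]          # particle object
cmds.connectDynamic(particles, emissions=emitter)    # emitter -> particles

gravity = cmds.gravity(pos=(0, 0, 0), magnitude=9.8)[0]
cmds.connectDynamic(particles, fields=gravity)       # gravity pulls particles

plane = cmds.polyPlane(width=20, height=20)[0]       # collision surface
cmds.collision(plane, particles, resilience=0.3)

# A particle-collision event, as configured in the Particle Collision Event
# Editor: here each particle dies on its second collision.
cmds.event(particles, count=2, die=True)
```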
Will Kim
Riverside City College, USA
Biography:
Will Kim is a Tenured Associate Professor of Art at Riverside City College and a Los Angeles-based artist. Kim is the founder and director of the RCC Animation Showcase. Kim received an M.F.A. in Animation from UCLA and a B.F.A. in Character Animation from CalArts (California Institute of the Arts). Before teaching at RCC, he also taught at CalArts, the Community Arts Partnership and Sitka Fine Arts Camp as a media art instructor. Kim’s work has been shown in over 100 international film/animation festivals and venues including the Directors Guild of America (DGA) Theater, the Academy of TV Arts and Sciences Theater, The Getty Center, The USC Arts and Humanities Initiative, and the Museum of Photographic Arts San Diego. As an animation supervisor and a lead animator, Will has participated in various feature and short live-action films that were officially selected for the New York Times’ Critics’ Pick, the United Nations’ climate change conference (Framework Convention), the Los Angeles Film Festival, Tribeca Film Festival, and Cannes.
Abstract:
Animation is a form of fine art. More important than emphasizing what software is used to create characters’ movements is how well one can communicate ideas and tell stories with honesty. In animation, the technology keeps changing all the time, while the fundamentals of drawing, painting, basic design, and animation principles never change. In pursuing traditional animation, there are digital compositing, special effects, and digital editing methods involved. In pursuing 3D or digital 2D animation, there are visualization and conceptualization steps that are often done in drawing or painting media. This lecture will discuss an animation filmmaking teaching and learning method that embraces originality and creative freedom in telling stories and expressing oneself, while students receive extensive opportunities to study digital animation techniques combined with traditional and/or experimental animation media.
Inma Carpe, Ed Hooks, Susana Rams
The Animation Workshop/VIA University College, Denmark, and Polytechnic University of Valencia, Spain
Title: Animation & Neurocinematics*: The visible language of E-motion-S and its magical science.
Biography:
Inma Carpe works as a visual development artist/animator and teacher at The Animation Workshop in Denmark. She gives workshops and collaborates with other countries in developing educational curricula, studying animation and affective neurosciences for self-development and communication, focusing on emotions and mindfulness in productions. She also works at film festivals in Hollywood as a production assistant. Her personal work in animation reflects an interest in collage, blending animation with fashion illustration, science and education. Her specialty in preproduction has brought her to live in different countries, working on short formats for independent studios.
Abstract:
We love movies because we like to jump from our “reality” to live a dream, a parallel universe that inspires us. We long for adventure, love, excitement, answers to quests. That is the magic of cinema: it makes you believe what you see and, above all, FEEL it. As Antonio Damasio said, “we are feeling machines that think”. Such feelings come from the interpretation of the emotions in our bodies. Emotions are our universal language, the motivation for living, the fuel, the key to what makes a movie successful and truly an art piece that you will remember, because it moves you; the secret is empathy. Animation, indeed, is a social-emotional learning medium, which goes beyond the limitations of live-action movies thanks to the diversity of its techniques and its visual plasticity, capable of constructing the impossible. Animators are not real actors but more like the midwife who brings the anima into aliveness, and that requires knowing how emotions work. Ed Hooks, an expert in training animators and actors, always remarks that “emotion tends to lead action”; animators must understand this, as well as the connections between thinking, emotions and physical action. I would like to show how integrating Hooks’s advice with the emerging results of scientists like Talma Hendler, Gal Raz or Paul Ekman, who study the science behind the scenes, the magic of Neurocinematics (Uri Hasson), can help any professional in the industry become more aware of our performances and enhance the cinematic experience. Animation is a visual thinking and feeling medium, which offers a promising, unlimited arena to explore and practice emotional intelligence, and keeps us interested in living fully aware and feeling new realities by loving and creating meaningful movies.
*Neurocinematics (Hasson): the neuroscience of cinema. Such studies reveal which brain areas and related emotions are engaged when watching movies.
Benjamin Kenwright
Edinburgh Napier University, United Kingdom
Title: Character Animation using Genetic Algorithms
Biography:
Dr. Benjamin Kenwright is part of the games technology group at Edinburgh Napier University. He studied at Liverpool and Newcastle University before moving on to work in the game industry and eventually joining the Department of Computing at Edinburgh Napier University. His research interests include real-time systems, evolutionary computation, and interactive animation. He is also interested in physics-based simulations and massively parallel computing.
Abstract:
The emergence of evolutionary search techniques (e.g., genetic algorithms) has paved the way for innovative character animation solutions, for example, generating human movements without key-frame data. Instead, character animations can be created using biologically inspired algorithms in conjunction with physics-based systems. Meanwhile, the development of highly parallel processors, such as the graphics processing unit (GPU), has opened the door to performance-accelerated techniques, allowing us to solve complex physical simulations in reasonable time frames. These acceleration techniques, in conjunction with sophisticated planning and control methodologies, enable us to synthesize ever more realistic characters that go beyond pre-recorded ragdolls towards more self-driven, problem-solving avatars. While traditional data-driven applications of physics within interactive environments have largely been confined to producing puppets and rocks, we explore a constrained autonomous procedural approach. The core difficulty is that simulating an animated character is easy, while controlling one is difficult. Since the control problem is not confined to human-type models, e.g., creatures with multiple legs, such as dogs and spiders, ideally there would be a way of producing motions for arbitrary physically simulated agents. This presentation focuses on evolutionary algorithms (i.e., genetic algorithms) compared to the traditional data-driven approach. We explain how generic evolutionary techniques are able to produce physically plausible and life-like animations for a wide range of articulated creatures in dynamic environments. We also explain the computational bottlenecks of evolutionary algorithms and possible solutions, such as exploiting massively parallel computational environments (i.e., the GPU).
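To make the evolutionary loop concrete, here is a minimal sketch under stated assumptions: a genome of joint-controller parameters, truncation selection, one-point crossover, and Gaussian mutation, with the physics-based fitness reduced to a stub. Real systems of the kind discussed evaluate each genome by running a full rigid-body simulation, often in parallel on the GPU.

```python
# Minimal genetic-algorithm sketch for controller parameters (fitness stubbed).
import random

GENOME_LEN = 24          # e.g. amplitude/phase/frequency per joint (assumed)
POP, GENS, MUT = 60, 100, 0.1

def fitness(genome):
    # Stub: in practice, simulate the character with these controller
    # parameters and score it (e.g. distance walked before falling).
    return -sum((g - 0.5) ** 2 for g in genome)

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)   # one-point crossover
    return a[:cut] + b[cut:]

def mutate(g):
    return [x + random.gauss(0, 0.1) if random.random() < MUT else x for x in g]

pop = [[random.random() for _ in range(GENOME_LEN)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:POP // 4]                  # truncation selection
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(POP - len(elite))]

best = max(pop, key=fitness)
```

The expensive step is always the fitness evaluation, which is why the bottleneck discussion above centres on evaluating the population in parallel.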
Daniel N. Boulos
University of Hawai’i Manoa, USA
Title: Abstraction and Stylized Design in 3D Animated Films: an Extrapolation of 2D Animation Design
Biography:
Daniel Boulos completed his Master’s in Educational Technology at the University of Hawaii and his Bachelor of Fine Arts at California Institute of the Arts. Mr. Boulos has worked professionally as an animator for Walt Disney Studios, DreamWorks Animation and Warner Brothers Feature Animation. He has been teaching animation in higher education for 20 years and recently completed his animated film, “The Magnificent Mr. Chim”. He is a lifetime member of the Animation Guild and a member of ASIFA (Association Internationale du Film d’Animation). He has presented and been published in the United States and abroad. His animation work appears in more than ten animated feature films and numerous commercials and animated shorts. He is currently writing a comprehensive book on animation processes.
Abstract:
Stylization is at the heart of 2D animation design and is only recently being more fully explored in 3D animated films. In the early days of 3D animation, the push for realism in lighting, rendering and deformations displaced the pursuit of stylization in the quest to expand the capabilities of computer graphics technology. With those technical problems solved, 3D animation has more recently embraced stylization in design and character movement. Stylization can also be interpreted as playfulness, and “play is at the heart of animation” (Powers, 2012, p. 52). Nature can be seen as an “abstract visual phenomenon” (Beckman & Ezawa, 2012, p. 101), and the portrayal of hyper-realistic human characters in 3D animation can lead to the alienation of an audience, as they may not accept the characters as being real (Kaba, 2012). It is the ability of animation to “break with naturalistic representation and visual realism” (Ehrlich, 2011) that is observed as one of the strengths of the art. This paper discusses the implications of stylized design and its use in 3D animated films, while drawing important references to traditional hand-drawn animation stylization processes that pose a challenge to modern 3D animation studios.
Anandh Ramesh
CEO, Voxel Works Pvt. Ltd., India
Title: Facial Animation Through Reverse Engineering Of Actions To Thought Process
Biography:
Anandh Ramesh is an Honors graduate in 3D Animation and VFX from Vancouver Film School with a Master’s in Computer Science (Computer Graphics) from The University of Texas at Arlington. He is the CEO of Voxel Works Pvt. Ltd., a premier animation training institution in Chennai, India. He has published a course on 3D stereoscopy for Digital Tutors, and has published papers at several national and international conferences. He is a recipient of the Duke of Edinburgh International Standard for Young People, the Bharat Excellence Award and the Rashtrya Ratan.
Abstract:
I propose a method whereby facial animation for characters can be derived by reverse engineering from the final action on the storyboard back to the thought train driving the action. For this process, we classify actions into conscious, subconscious and unconscious actions, and derive the less obvious subconscious and unconscious parts leading to the conscious action. We begin by analyzing the situation at hand and how it applies to each character in it. Then we use the storyboards to understand the primary action of the character. Here we study the face of the character, i.e., his expression, and the body language, i.e., the line of action and the pose. Then we proceed to analyze the possible references to the character’s past that could drive the action. Here, we try to reason about things he might have seen or heard, and his own internal reasoning, that led to his interpretation of the situation and the consequent action. Finally we derive the inner monologue of the character that drives the action. Once we finish the reverse engineering from the action in the storyboard to the thoughts and emotions, we map the eye darts, blinks, eyebrow movement, leading actions and the required anticipations within the time frame stipulated by the storyboard. This method of reverse-engineering-based animation results in more cohesive acting throughout a film, and creates a greater connection with audiences.
Yen-Jung Chang
Department of Graphic Art and Communication, National Taiwan Normal University
Title: A Framework of Humorous and Comic Effects on Narrative and Audiovisual Styles for Animation Comedy
Biography:
Yen-Jung Chang was born in Taipei, Taiwan in 1972. He studied in the School of Film and Animation at Rochester Institute of Technology, USA. After graduation, he worked as an animator in Buffalo and Los Angeles. From 2006, Yen-Jung was granted a scholarship from the Ministry of Education, Republic of China (Taiwan) to study for a PhD in the School of Creative Media, RMIT University, Australia. He obtained the PhD in 2009 and went back to Taiwan to teach at universities. He has completed four animated short films as a film director. He is also a researcher who focuses on animation theories and practices. He has actively participated in academic events and film festivals. Yen-Jung Chang now teaches in the Department of Graphic Art and Communication, National Taiwan Normal University, Taipei, Taiwan.
Abstract:
Humorous and comic elements are essential for entertaining audiences and key to box-office success. However, little research has systematically illustrated the importance and effects of these elements, due to the complexity of subjective judgment during film production. Hence, this research aims to analyze the narrative and audiovisual styles that promote the effects of animation comedy and to consolidate them into a framework. The elements and features for evaluating an animated film are formed based on surveys of experts’ opinions from the animation industry and academia. A consensus of experts’ opinions on weights and ratings is mathematically derived using fuzzy Delphi and AHP methodologies. The results indicate that reversal, exaggeration and satire are regarded as the most significant narrative features in an animated film. More specific to the application of audiovisual elements, characters’ acting, character design and sound are perceived as prominently important. Hence, based on the preliminary structure obtained from the survey, a framework of audiences’ reception of the humorous and comic effects of animated films is established. This framework illustrates the process by which audiences perceive and react to the narrative and audiovisual elements of animation comedy. Observation and evaluation of this framework in theaters can be studied further.
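For readers unfamiliar with the AHP step, the sketch below derives criterion weights from a pairwise-comparison matrix via its principal eigenvector and checks consistency; the matrix entries are invented for illustration, whereas the study aggregates real expert judgments (and fuzzifies them via fuzzy Delphi).

```python
# Minimal AHP weighting sketch (illustrative pairwise comparisons).
import numpy as np

# Pairwise comparisons of three illustrative criteria, e.g. reversal,
# exaggeration, satire, on Saaty's 1-9 scale (A[i, j] ~ w_i / w_j).
A = np.array([[1.0, 2.0, 3.0],
              [1/2, 1.0, 2.0],
              [1/3, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                 # principal eigenvalue
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                    # normalized criterion weights

n = A.shape[0]
CI = (eigvals.real[k] - n) / (n - 1)        # consistency index
CR = CI / 0.58                              # random index for n = 3
print("weights:", weights.round(3), "CR:", round(CR, 3))  # CR < 0.1 is acceptable
```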
Omar Linares
Emily Carr University of Art and Design, Canada
Title: Criteria to Define Animation, a Review of the Definition in the Advent of Digital Moving Images
Biography:
Omar Linares graduated with a major in Cultural Practices from Emily Carr University of Art and Design in Vancouver, Canada. His studies revolve around animation, documentary film, and international cinema. He will be joining the Master’s in Film Studies at Concordia University in Montreal in September 2015.
Abstract:
Animation has become ubiquitous, from cartoons to special effects, from commercials to information visualization; nonetheless, its own definition is more elusive than ever. Digital imaging has blurred the line between what is animated and what is a reproduction of recorded movement, rendering previous definitions of frame-by-frame production and non-recorded movement seemingly obsolete. Moreover, digital automation has also contested the authorship of moving images. In this light, can animation be defined? Rather than defining animation by what it is not, as the illusion of motion that is not recorded, the author reviews constitutive traits common to all moving images, like intervallic projection; those absent from animation, like reconstitution of movement; those specific to animation, like artificial change in positions; and notions of the index and digital authorship to distinguish animation as a particular type of moving image.
These considerations are arranged in a set of criteria with which to define animation by what it is, positively. Additionally, while the emphasis is on digital moving images, these criteria are applicable to analogue techniques of animation. Ultimately, the author’s examples point to a continuity with old techniques and definitions, a continuity that extends to moving image practices outside of either animation or cinema.
Matthew A. Tovar
University of the Incarnate Word, USA
Title: Robust Animation Instruction for the “Uneducated” Freshman
Biography:
I am currently serving as an Instructor at the University of the Incarnate Word in San Antonio, Texas. Prior to teaching, I worked at various studios such as Sony Imageworks, Sony Computer Entertainment of America, Naughty Dog & Infinity Ward. Some of my professional projects include Green Lantern, The Amazing Spider-Man, Uncharted 2, Uncharted 3 and The Last of Us. I most recently served as a senior animator at Sony Computer Entertainment of America in San Diego, working on a AAA PS4 title.
Abstract:
Incoming freshmen face a large reality check entering animation. To most, animation is fun and exciting, and offers immersion in an entertainment world that is considered glamorous. Not only is the work challenging and time-consuming, it requires intense attention to detail and constant practice and improvement. Instant gratification is not the norm. Comprehending the number of individuals, the talents and the workload involved in the animation process is only the beginning of the learning curve. The animation industry seeks students who are not only technically savvy, but dedicated, patient and, most importantly, able to work well with others, work hard, long hours, and understand their role and responsibilities in the production. Knowing the principles of animation provides a strong foundation in the field, but being able to apply them to one’s animation is key, along with learning other technical aspects such as how and why to use the graph editor, the timeline, weighted or unweighted tangents, or broken tangents. This presentation will outline the freshman animation course developed, along with various teaching techniques and tools, with some preliminary outcomes and lessons learned.
Tien-Tsin Wong
The Chinese University of Hong Kong, Hong Kong
Biography:
Tien-Tsin Wong is known for his pioneering work in computational manga, image-based relighting, ambient occlusion (dust accumulation simulation), sphere maps, and GPGPU for evolutionary computing. He graduated from the Chinese University of Hong Kong in 1992 with a B.Sc. degree in Computer Science. He obtained his M.Phil. and Ph.D. degrees in Computer Science from the same university in 1994 and 1998 respectively. He was with HKUST in 1998. In August 1999, he joined the Computer Science & Engineering Department of the Chinese University of Hong Kong, where he is currently a Professor. He is also the director of the Digital Visual Entertainment Laboratory at CUHK Shenzhen Research Institute (CUSZRI). He is an ACM Senior Member and an HKIE Fellow. He received the IEEE Transactions on Multimedia Prize Paper Award 2005 and the Young Researcher Award 2004. He served on the Academic Committee of the Microsoft Digital Cartoon and Animation Laboratory at Beijing Film Academy, and has been a visiting professor at both South China University of Technology and the School of Computer Science and Technology at Tianjin University. He has been actively involved (as a program committee member) in several prestigious international conferences, including SIGGRAPH Asia (2009, 2010, 2012, 2013), Eurographics (2007-2009, 2011), Pacific Graphics (2000-2005, 2007-2014), ACM I3D (2010-2013), ICCV 2009, and IEEE Virtual Reality 2011. His main research interests include computer graphics, computational manga, computational perception, precomputed lighting, image-based rendering, GPU techniques, medical visualization, multimedia compression, and computer vision.
Abstract:
Traditional manga (comic) and anime (cartoon) creation are painstaking processes. Even when computers are utilized during production, they mainly serve as a naive digital canvas. With the increasing computing power and decreasing cost of CPUs & GPUs, more computing resources can be exploited cost-effectively for intelligent and semi-automatic creation of aesthetic content. In this talk, we present our recent works on computational manga and anime, in which we aim at facilitating various production steps with advanced computer technologies. Manga artists usually draw backgrounds based on real photographs. Such background preparation is tedious and time-consuming. Some artists already make use of simple computer techniques, such as halftoning, to convert a given color photograph into B/W manga. However, the resultant mangas are inconsistent in style and monotonous in pattern due to the single halftone screen. I will present a way to turn a color photograph into manga while preserving the color distinguishability of the original photo, just as traditional manga artists do. On the other hand, there is a trend of migrating manga publishing from the traditional paper medium to the digital domain via the screens of portable devices. There are companies doing colorization of B/W mangas (of course, in a painstakingly manual fashion) to allow users to read color manga on portable devices. I will present a computer-assisted method to colorize an originally B/W manga into a color version by simply scribbling on the B/W version. Lastly, I will present our latest work on the automatic conversion of 2D hand-drawn cel animations to stereoscopic ones. As it is infeasible to ask cel animators to draw stereo frames, very little stereo cel animation has been produced so far. I will present a method that exploits the scarce depth cues left in the hand-drawn animation in order to synthesize temporally consistent and visually plausible stereoscopic cel animation.
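As a point of reference for the halftoning discussion, the sketch below implements the naive single-screen baseline the talk improves upon: ordered dithering of a grayscale photo with a 4x4 Bayer matrix. The file names are placeholders; the presented method instead produces style-consistent screens that preserve color distinguishability.

```python
# Minimal ordered-dithering (single halftone screen) sketch.
import numpy as np
from PIL import Image

BAYER4 = (np.array([[ 0,  8,  2, 10],
                    [12,  4, 14,  6],
                    [ 3, 11,  1,  9],
                    [15,  7, 13,  5]]) + 0.5) / 16.0    # thresholds in (0, 1)

img = np.asarray(Image.open("photo.jpg").convert("L"), dtype=np.float64) / 255.0
h, w = img.shape
thresh = np.tile(BAYER4, (h // 4 + 1, w // 4 + 1))[:h, :w]
bw = (img > thresh).astype(np.uint8) * 255              # binary manga-style screen
Image.fromarray(bw).save("halftoned.png")
```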
David M Breaux Jr.
Sr Character / Creature Animator & Animation Instructor, USA
Title: Animation from Motion Capture - Pitfalls, Potential and Proper Uses…
Biography:
I have completed work on more than 30 projects and counting during my time in the industry. I have a combined 16+ years of experience in film, games, commercials and television. I specialize in highly detailed character and creature performance animation, using both key-framed and motion-captured data, or a hybrid of the two where appropriate. My professional film experience includes a diverse list of animal, character and creature types encompassing the fantastic, the realistic and the cartoony. My most recent released projects were for Blur Studios on the Tom Clancy’s The Division pre-release trailer and the Halo: The Master Chief Collection cinematic remastering.
Abstract:
Motion capture is the practice of capturing the movements of a chosen subject, most often a human subject. Motion capture has progressed greatly through many iterations of technology over the years. The mysteries that remain seem to be when and how to use it. That statement is a little audacious, I must admit, but there is good reason. Quite often motion capture, in both games and film, is viewed as a means to a quicker and cheaper solution. What is never taken into consideration is the inevitability that a director will change their mind and request adjustments, and the ever-popular dirty mo-cap data received from the supplier. These can often take as much time to repair, change or adjust, and can be quite monotonous and taxing on the artists assigned the job. This isn’t to say mo-cap doesn’t have its place, especially in film, where realism in VFX-laden movies stands oddly in contrast to the ever less realistic scenarios the characters are thrust into. Motion capture is used very often in video games with the intention of adding to the realism of the game. What we often end up with is very weightless-feeling characters. Why is that? The root of the problem is how the motion capture is being used, and the lack of cues that the eye, and ultimately the human brain, uses to register visual weight. As hardware’s technical capability allows animators to include more and more detail in character animation, this becomes less of an issue, but understanding exactly what makes something look weightless informs our understanding of the best methods to use in our creations.
- Imaging and Image Processing
Session Introduction
Ching Y. Suen
Director, Centre for Pattern Recognition and Machine Intelligence
Concordia University, Canada
Title: Judging Female Facial Beauty by Computer
Biography:
Dr. Ching Y. Suen is the Director of CENPARMI and the Concordia Chair on AI & Pattern Recognition. He received his Ph.D. degree from UBC (Vancouver) and his Master's degree from the University of Hong Kong. He has served as the Chairman of the Department of Computer Science and as the Associate Dean (Research) of the Faculty of Engineering and Computer Science of Concordia University. Prof. Suen has served at numerous national and international professional societies as President, Vice-President, Governor, and Director. He has given 45 invited/keynote papers at conferences and 195 invited talks at various industries and academic institutions around the world. He has been the Principal Investigator or Consultant of 30 industrial projects. His research projects have been funded by the ENCS Faculty and the Distinguished Chair Programs at Concordia University, FCAR (Quebec), NSERC (Canada), the National Networks of Centres of Excellence (Canada), the Canadian Foundation for Innovation, and the industrial sectors in various countries, including Canada, France, Japan, Italy, and the United States. Dr. Suen has published 4 conference proceedings, 12 books and more than 495 papers, and many of them have been widely cited while the ideas in others have been applied in practical environments involving handwriting recognition, thinning methodologies, and multiple classifiers. Dr. Suen is the recipient of numerous awards, including the Gold Medal from the University of Bari (Italy 2012), the IAPR ICDAR Award (2005), the ITAC/NSERC national award (1992), and the "Concordia Lifetime Research Achievement" and "Concordia Fellow" Awards (2008 and 1998 respectively). He is a fellow of the IEEE (since 1986), IAPR (1994), and the Academy of Sciences of the Royal Society of Canada (1995). Currently, he is the Editor-in-Chief of the journal Pattern Recognition and an Adviser or Associate Editor of 5 journals.
Abstract:
Beauty is one of the foremost ideas that define human personality. In this talk, various approaches to the comprehension and analysis of human beauty are presented and the use of these theories is outlined. Each set of theories is translated into a feature model that is tested for classification. Selecting the best set of features that result in the most accurate model for representing the human face is a key challenge. This research introduces the combined use of three main groups of features for the classification of female facial beauty, used with classification through support vector machines. It concentrates on building an automatic system for the measurement of female facial beauty. The approach is one of analysis of the central tenets of beauty, the successive application of image processing techniques, and finally the use of relevant machine learning methods to build an effective system for the automated assessment of facial beauty. Plenty of examples will be illustrated during the talk.
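As an illustration of the final classification stage, here is a minimal sketch (assuming scikit-learn) of an SVM trained on facial feature vectors labeled by human beauty ratings; the feature extraction is stubbed with random data, whereas the talk's system combines three real groups of facial features.

```python
# Minimal SVM classification sketch; features and labels are placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 40))     # placeholder: one feature vector per face
y = rng.integers(0, 2, size=200)   # placeholder: binary beauty rating

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```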
Frédéric Cointault
Agrosup Dijon – UMR Agroécologie, France
Title: Image acquisition and processing for Precision Farming applications
Biography:
Frédéric Cointault received his Ph.D. in Instrumentation and Image Computing (3I) from the University of Burgundy in 2001, and his "accreditation to supervise research" in 3I in 2010. Since 2002 he has worked as an associate professor at Agrosup Dijon, one of the seven French agronomic and agri-food higher-education institutions. He is the Head of an international Master's degree in Biosystems Engineering and Economics, and is the national co-head of a Joint Technological Network on Agricultural Engineering and ICT for Agroecology. His research concerns the development of image acquisition tools (3D, high speed, NIR, etc.) and image processing methods (color/texture information, motion estimation, etc.) for plant phenotyping, determination of diseases on leaves and grapes, spraying and spreading characterization, and more generally for precision agriculture and viticulture. He is also a member of the ISPA (International Society of Precision Agriculture) and of the IFS (International Fertiliser Society).
Abstract:
Initially developed for technical industrial sectors such as medicine or aeronautics, imaging techniques have been used more and more over the past 30 years in agriculture and viticulture. The development of acquisition tools and the decrease in computation time allowed imagery to be used in the laboratory under controlled conditions. At the beginning of the 1990s, the concept of Precision Farming was developed in the USA, considering a field as a heterogeneous area needing different inputs in terms of fertiliser or protection products. At the same time, the opening of the GPS system to civil applications allowed the development of the remote sensing domain. Combining GPS information and imagery also led to the emergence of proxy-detection applications in the agriculture and viticulture domains, in order to optimize crop management. Localized crop management needs new technologies such as computing, electronics and imaging, and the conception of a proxy-detection system is motivated by the need for better resolution, precision and temporality, and lower cost, compared to remote sensing. The use of computer vision techniques allows this information to be obtained automatically, with objective measurements compared to visual or manual acquisition. The main applications of computer vision in agriculture are tied to crop characterization (biomass estimation, leaf area, volume, crop height, disease determination, etc.), aerial or root phenotyping in the field or on specific platforms, and the understanding of spraying and spreading processes. This presentation will explain the different imaging systems used to characterize the above parameters, in 2D or 3D. It will also give some details on the dedicated image processing methods developed, related to motion estimation, focus information, pattern recognition and multi- and hyperspectral data.
Jorge Sánchez
Assistant Researcher, National Scientific and Technical Research Council (CONICET), Argentina
Title: Image understanding at a large-scale: from shallow to deep and beyond
Biography:
Dr. Jorge Sánchez is an Assistant Researcher at the National Scientific and Technical Research Council (CONICET). He received his Dr. of Engineering degree from the National University of Córdoba (UNC), Argentina, in 2012. He is currently an Adjunct Professor at UNC's Faculty of Mathematics, Astronomy and Physics. Most of his research focuses on developing models and representations that can be applied to large-scale image analysis problems. He participated in the ImageNet Large-Scale Visual Recognition Challenge (ILSVRC) in 2010 and 2011, obtaining an honourable mention and first place, respectively. His research interests include image classification, object recognition and large-scale problems in computer vision.
Abstract:
The analysis and understanding of images at a large scale is a problem which has received increasing attention in the past few years. The rapid growth in the number of images and videos online and the availability of datasets consisting of hundreds of thousands or even millions of manually annotated images impose exciting new challenges on the computer vision community as a whole. One of the fundamental problems of visual recognition, i.e. the way we represent images and their content, is witnessing a paradigm shift towards a new class of models that try to exploit the vast amount of available data as well as the rapid growth and widespread use of high-performance computing systems.
In this talk, I will discuss different models that have been proposed in the computer vision literature to encode visual information over the past few years, from the early shallow models to the more recent deep architectures. I will focus on the large-scale image annotation problem, i.e. the task of assigning semantic labels to images when the number of images and/or the number of possible annotations is very large, and its connection with other problems of growing practical interest.
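As one concrete example of the "shallow" end of that spectrum, the sketch below computes a classic bag-of-visual-words image signature: local descriptors are quantized against a k-means codebook and pooled into a histogram. The descriptors here are random placeholders (real pipelines use e.g. dense SIFT), and richer encodings such as Fisher vectors follow the same quantize-and-pool pattern.

```python
# Minimal bag-of-visual-words encoding sketch (placeholder descriptors).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
train_descs = rng.normal(size=(5000, 128))    # placeholder local descriptors
codebook = KMeans(n_clusters=64, n_init=10, random_state=0).fit(train_descs)

def encode(image_descs):
    """L1-normalized histogram of codeword assignments for one image."""
    words = codebook.predict(image_descs)
    hist = np.bincount(words, minlength=64).astype(np.float64)
    return hist / hist.sum()

signature = encode(rng.normal(size=(300, 128)))   # one 64-d image signature
```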
Ines Aparecida
Sao Paulo State University, Brazil
Title: Multi-Scale Local Mapped Pattern for Spoof Fingerprint Detection
Biography:
Dr. Ines Boaventura graduated in Mathematics from Sao Paulo State University (UNESP), Brazil, holds a master's in Computer Science and Computational Mathematics, and received her Ph.D. in Electrical Engineering from the University of Sao Paulo (USP). She has experience in Computer Science, focusing on graphical processing (graphics), and works on the following subjects: biometrics, image processing, and computer vision. She is a full-time professor and head of the Department of Computer Science and Statistics at UNESP, campus of Sao Jose do Rio Preto, Sao Paulo, Brazil. In 2011-2012 she was a visiting researcher at the PRIP Laboratory, CSE, Michigan State University.
Abstract:
In this talk we will address the problem of detecting spoofing using image processing and pattern recognition techniques. Within this context, we have a widely used texture extractor, the Local Binary Pattern (LBP), proposed in 1996. In 2014, multi-scale versions of this method were presented, referred to as MSLBP (Multi-Scale Local Binary Pattern). In the same year the LMP (Mean Local Mapped Pattern) technique, equally based on LBP, was also introduced. These new techniques offered quite promising results. We will show a new technique joining both previous methods, the LMP and the MSLBP, herein referred to as MSLMP (Multi-Scale Mean Local Mapped Pattern). The proposal of this new approach is to attenuate the noise that often occurs in digital images by smoothing the high frequencies found in the neighborhood of a pixel. Forgeries are detected through the analysis of micropatterns extracted from fingerprint images. In the proposed method, the micropatterns are responsible for representing the most abstract features, which describe properties that differentiate forged from genuine fingerprints. The experiments carried out so far suggest that the presented technique provides detection with higher performance than the state-of-the-art results reported in the specialized scientific literature.
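For context, the sketch below implements the basic LBP operator that the MSLBP and MSLMP variants build on: each pixel is coded by thresholding its eight neighbors against the center, and the code histogram serves as the texture feature. The multi-scale mapped variants discussed in the talk replace this hard thresholding with larger neighborhoods and a smooth mapping function; the fingerprint image here is a random stand-in.

```python
# Minimal basic-LBP sketch; the talk's MSLMP generalizes this operator.
import numpy as np

def lbp8(img):
    """3x3 LBP codes for a 2-D grayscale array (borders excluded)."""
    c = img[1:-1, 1:-1]
    # Neighbor offsets, clockwise from top-left; each contributes one bit.
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.int32)
    for bit, (dy, dx) in enumerate(shifts):
        n = img[1 + dy: img.shape[0] - 1 + dy, 1 + dx: img.shape[1] - 1 + dx]
        code |= (n >= c).astype(np.int32) << bit
    return code

fingerprint = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in
hist = np.bincount(lbp8(fingerprint).ravel(), minlength=256)       # texture feature
```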
Walid El-Shafai
Menoufia University, Egypt
Title: Joint Adaptive Pre-processing Resilience and Post-processing Concealment Schemes for 3D Video Transmission
Biography:
Walid El-Shafai was born in Alexandria, Egypt, on April 19, 1986. He received the B.Sc. degree in Electronics and Electrical Communication Engineering from the Faculty of Electronic Engineering (FEE), Menoufia University, Menouf, Egypt, in 2008, and the M.Sc. degree from Egypt-Japan University of Science and Technology (E-JUST) in 2012. He is currently working as a Teaching Assistant and Ph.D. researcher in the ECE Department, FEE, Menoufia University. His research interests are in the areas of Wireless Mobile and Multimedia Communications, Image and Video Signal Processing, 3D Multi-view Video Coding, and Error Resilience and Concealment Algorithms for the H.264/AVC and H.264/MVC standards. He is an Android Certified Application Developer, Engineer, and Trainer (Android ATC, Advanced Training Consultants). He is a reviewer for international journals published by IEEE, Springer, Elsevier and others. He is currently a lecturer at the Faculty of Electronic Engineering (FEE), Menoufia University, Menouf, Egypt.
Abstract:
3D Multi-View Video (MVV) consists of multiple video streams shot simultaneously by several cameras around a single scene. In Multi-view Video Coding (MVC), the spatio-temporal and inter-view correlations between frames and views can be used for error concealment. 3D video transmission over erroneous networks is still a considerable issue due to restricted resources and the presence of severe channel errors. Efficiently compressing 3D video at a low transmission rate, while maintaining a high quality of the received 3D video, is very challenging. Since it is not feasible to re-transmit all corrupted Macro-Blocks (MBs) in real-time applications with limited resources, it is necessary to recover the lost MBs at the decoder side using suitable post-processing schemes, such as Error Concealment (EC). EC algorithms have the advantage of improving the received 3D video quality without any modifications to the transmission rate or to the encoder hardware or software. In this presentation, I will explore different Adaptive Multi-Mode EC (AMMEC) algorithms at the decoder, based on utilizing adaptive pre-processing techniques, i.e. Flexible Macro-block Ordering Error Resilience (FMO-ER), at the encoder, to efficiently conceal and recover the erroneous MBs of intra- and inter-coded frames of the transmitted 3D video. I will also present extensive experimental simulation results showing that our proposed schemes can significantly improve the objective and subjective 3D video quality.
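As a point of reference, the sketch below implements only the simplest spatial concealment mode: a lost 16x16 macro-block is bilinearly interpolated from the pixel rows and columns bordering it. The AMMEC schemes discussed in the talk adaptively switch between spatial, temporal and inter-view modes; this fragment is a generic illustration, not the proposed algorithm.

```python
import numpy as np

def conceal_mb(frame, x, y, mb=16):
    """Spatial error concealment of a lost macro-block at (x, y):
    weighted bilinear interpolation from the four bordering pixel lines."""
    top    = frame[y - 1,    x:x + mb]
    bottom = frame[y + mb,   x:x + mb]
    left   = frame[y:y + mb, x - 1]
    right  = frame[y:y + mb, x + mb]
    w = (np.arange(mb, dtype=float) + 1) / (mb + 1)   # interpolation weights
    vert = (1 - w)[:, None] * top[None, :]  + w[:, None] * bottom[None, :]
    horz = (1 - w)[None, :] * left[:, None] + w[None, :] * right[:, None]
    frame[y:y + mb, x:x + mb] = 0.5 * (vert + horz)   # average both directions
    return frame

frame = np.random.rand(64, 64)        # stand-in luma plane
conceal_mb(frame, 16, 16)             # recover the macro-block at (16, 16)
```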
Nishant Shrivastava
Jaypee University, India
Title: Content Based Image Retrieval: Approaches, Challenges & Future Directions
Biography:
Dr. Nishant Shrivastava is working as an Assistant Professor in the Department of Computer Science and Engineering, Jaypee University Anoopshahr, India. He received his PhD in Computer Science and Engineering from Jaypee University of Engineering and Technology, Guna, India. His research work is based on the development of efficient techniques for image retrieval and classification using semantic visualization of images. He has about 11 years of teaching and research experience in various reputed organizations. He is a lifetime member of ISTE and IEEE. Focusing on image retrieval research, Dr. Shrivastava has published many papers in reputed international journals and conferences. He has been appointed as a reviewer and member of the editorial board of various reputed international conferences and journals.
Abstract:
Image retrieval and classification is a major field of research in the area of image processing and computer vision. Early image retrieval systems searched for images based on keywords found in their surrounding text. These Text Based Image Retrieval (TBIR) systems require manual annotation of images in advance. However, annotation of images is a very tedious task that requires a lot of time and often produces misleading results. To overcome this limitation of TBIR, the visual content of the images is employed to search for images. Systems utilizing this concept for searching, navigating and browsing images from large image databases are termed Content Based Image Retrieval (CBIR) systems. A CBIR system is more successful and closer to human perception, as it can search for similar images based on the visual content of a given query image or sketch. A typical CBIR system involves tasks such as query formulation, pre-processing, feature extraction, multidimensional indexing, similarity computation, relevance feedback and the output of similar images as per user requirements. This presentation will provide deep insight into the various tools and techniques used in CBIR systems. The conceptualization and implementation of each CBIR task, together with the implementation of complete systems, starting from simple ones and moving to the most complex, will be discussed in detail. Further, my contribution to the field of image retrieval and classification, with verified results and example queries, will be presented and compared with existing state-of-the-art techniques. Finally, future work, problems and challenges in developing an efficient CBIR system will be discussed together with suggested solutions. The main goal of the presentation is to arouse the interest of the audience and attract potential researchers to the fascinating field of image retrieval.
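As a minimal, self-contained illustration of the feature-extraction and similarity-computation tasks listed above, the sketch below ranks a database by histogram intersection of global color histograms, the simplest CBIR feature; real systems add the indexing, relevance feedback and semantic layers discussed in the talk.

```python
import numpy as np

def color_hist(img, bins=8):
    """Global RGB color histogram: the simplest CBIR visual feature."""
    h, _ = np.histogramdd(img.reshape(-1, 3), bins=(bins,) * 3,
                          range=((0, 256),) * 3)
    h = h.ravel()
    return h / h.sum()

def retrieve(query, database, k=5):
    """Rank database images by histogram-intersection similarity."""
    q = color_hist(query)
    sims = [np.minimum(q, color_hist(img)).sum() for img in database]
    return np.argsort(sims)[::-1][:k]     # indices of the k best matches

db = [np.random.randint(0, 256, (32, 32, 3)) for _ in range(20)]
print(retrieve(db[3], db))                # db[3] itself should rank first
```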
- Simulation and Modeling
Session Introduction
Paul Fishwick
The University of Texas at Dallas, USA
Title: Leveraging the Arts for Modeling & Simulation
Biography:
Paul Fishwick is Distinguished University Chair of Arts and Technology (ATEC), and Professor of Computer Science. He has six years of industry experience as a systems analyst working at Newport News Shipbuilding and at NASA Langley Research Center in Virginia. He was on the faculty at the University of Florida from 1986 to 2012, and was Director of the Digital Arts and Sciences Programs. His PhD was in Computer and Information Science from the University of Pennsylvania. Fishwick is active in modeling and simulation, as well as in the bridge areas spanning art, science, and engineering. He pioneered the area of aesthetic computing, resulting in an MIT Press edited volume in 2006. He is a Fellow of the Society for Computer Simulation, served as General Chair of the Winter Simulation Conference (WSC), was a WSC Titan Speaker in 2009, and has delivered over 16 keynote addresses at international conferences. He is Chair of the Association for Computing Machinery (ACM) Special Interest Group in Simulation (SIGSIM). Fishwick has over 230 technical papers and has served on all major archival journal editorial boards related to simulation, including ACM Transactions on Modeling and Simulation (TOMACS) where he was a founding area editor of modeling methodology in 1990. He is on the editorial board of ACM Computing Surveys.
Abstract:
Since its inception, computer graphics has played a major role in several areas such as computer-aided design, game development, and computer animation. Through the use of computer graphics, we enjoy artificial realities and the ability to draw figures within a flexible electronic medium. Computer simulation in computer graphics is generally construed to be simulation used to achieve realistic behavioral effects. But what if the naturally art-based design approaches in graphics could be used to visualize and manipulate the mathematical models used as the basis of simulation? This direction suggests that graphics, and the arts, can affect how we represent complex models. I'll present approaches used in our Creative Automata Laboratory to reframe models as works of art that maintain aesthetic appeal yet are highly functional and mathematically precise.
Leonel Toledo
Instituto Tecnológico de Estudios Superiores de Monterrey, Campus Estado de México, Mexico
Title: Level of Detail for Crowd Simulation
Biography:
Leonel Toledo received his PhD from Instituto Tecnológico de Estudios Superiores de Monterrey, Campus Estado de México, in 2014, where he is currently a full-time professor. From 2012 to 2014 he was an assistant professor and researcher. He has devoted most of his research work to crowd simulation and visualization optimization. He has worked at the Barcelona Supercomputing Center, using general-purpose graphics processors for high-performance graphics. His thesis work was on level of detail techniques for creating varied animated crowds. His research interests include crowd simulation, animation, visualization and high-performance computing.
Abstract:
Crowd simulation and animation find applications in many areas, including entertainment (e.g. animation of large numbers of people in movies and games), the creation of immersive virtual environments, and the evaluation of crowd management techniques. Interactive virtual crowds require high-performance simulation, animation and rendering techniques to handle numerous characters in real time. These characters must be believable in their actions and behaviors. The main challenges are to remove the least perceptible details first, to preserve the global aspect of the crowd as well as possible, and meanwhile to significantly improve computation times. We introduce a level of detail system for varied animated crowds that is capable of handling several thousands of different animated characters at interactive frame rates. The system is focused on rendering optimization and can be extended to build more complex scenes. This level of detail system allows us to incorporate physics into the simulation and modify the animation of the agents as forces are applied to the models in the environment, avoiding rendering and simulation bottlenecks as much as possible. This way it is possible to render scenes with up to a quarter of a million characters at interactive frame rates.
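The core idea of such a system can be sketched in a few lines: each agent is assigned a rendering and animation policy based on its distance to the camera, so that the most expensive work is spent only where it is perceptible. The thresholds and policies below are illustrative placeholders, not the values used in the speaker's system.

```python
import numpy as np

# Distance thresholds (in scene units) paired with increasingly cheap policies.
LODS = [(10.0, "full mesh, per-frame skinning"),
        (40.0, "reduced mesh, skinning every 2nd frame"),
        (float("inf"), "impostor billboard, skinning every 8th frame")]

def select_lod(agent_pos, camera_pos):
    """Pick the cheapest acceptable level of detail for one agent."""
    dist = np.linalg.norm(agent_pos - camera_pos)
    for threshold, policy in LODS:
        if dist <= threshold:
            return policy

camera = np.zeros(3)
agents = np.random.uniform(-100, 100, (5, 3))   # random crowd positions
for a in agents:
    print(select_lod(a, camera))
```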
Roland Geraerts
Utrecht University, Netherlands
Title: Crowd simulation - A computational model of human navigation
Biography:
Abstract:
A huge challenge is to simulate tens of thousands of virtual characters in real time such that they pro-actively and realistically avoid collisions with each other and with obstacles present in their environment. This environment contains semantic information (e.g. roads and bicycle lanes, dangerous and pleasant areas), is three-dimensional (e.g. contains bridges that people can walk both over and under) and can dynamically change (e.g. a bridge partially collapses or some fences are removed). We show how to create a generic framework centered around a multi-layered navigation mesh and how it can be updated dynamically and efficiently for such environments. Next, we show how (groups of) people move, avoid collisions and coordinate their movements, based on character profiles and semantics. We run our simulations in realistic environments (e.g. soccer stadiums or train stations) and game levels to study the effectiveness of our methods. Finally, we demonstrate our software package that integrates this research. Why would we need to simulate a crowd? The results can be used to decide whether crowd pressures build up too much during a festival such as the Love Parade; to find out how to improve crowd flow in a train station; to plan escape routes for use during a fire evacuation; to train emergency personnel to deal with evacuation scenarios; to study a range of scenarios during an event; or to populate a game environment with realistic characters. After this presentation you'll understand why state-of-the-art crowd simulations need a more generic and efficient representation of the navigable areas, why speed and extendability are obtained by splitting the simulation into at least five different levels, why we need a paradigm shift from graph-based to surface-based navigation, and why a path planning algorithm should NOT compute a path.
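For contrast with the surface-based approach advocated above, the sketch below shows the classical graph-based baseline: A* search over the adjacency graph of walkable polygons, using polygon centers as waypoints. All data here is hypothetical; a real multi-layered navigation mesh adds connections between layers (stairs, bridges) and a separate steering stage on top of the global route.

```python
import heapq

def astar(adjacency, centers, start, goal):
    """A* over a polygon-adjacency graph; returns a list of polygon ids."""
    def h(n):  # straight-line heuristic between polygon centers
        (x1, y1), (x2, y2) = centers[n], centers[goal]
        return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5

    frontier = [(h(start), 0.0, start)]
    best, parent = {start: 0.0}, {start: None}
    while frontier:
        _, g, node = heapq.heappop(frontier)
        if node == goal:                      # reconstruct the route
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nbr, cost in adjacency[node]:
            if g + cost < best.get(nbr, float("inf")):
                best[nbr], parent[nbr] = g + cost, node
                heapq.heappush(frontier, (g + cost + h(nbr), g + cost, nbr))
    return None

adjacency = {0: [(1, 1.0)], 1: [(0, 1.0), (2, 1.5)], 2: [(1, 1.5)]}
centers = {0: (0, 0), 1: (1, 0), 2: (2, 1)}
print(astar(adjacency, centers, 0, 2))        # [0, 1, 2]
```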
McArthur Freeman
University of South Florida, USA
Title: Starting at the Simulation: Learning from Digital Tools and Hybrid Processes
Biography:
McArthur Freeman, II is a visual artist and designer who creates work that explores hybridity and the construction of identity. His works have ranged from surreal narrative paintings and drawings to digitally constructed sculptural objects and animated 3D scenes. His most recent works combine three interrelated emerging technologies: digital sculpting, 3D scanning, and 3D printing. Freeman’s work has been published in Nka Journal of Contemporary African Art and has been exhibited in over 50 group and solo shows within the United States. Freeman is currently an Assistant Professor of Video, Animation, and Digital Arts at the University of South Florida. Prior to his appointment at USF, Freeman taught at Clarion University, Davidson College, and North Carolina State University. He has also taught Drawing at the Penland School of Crafts. Freeman earned his BFA degree in Drawing and Painting from the University of Florida. He received his MFA from Cornell University, with a concentration in Painting. He also holds a Master of Art and Design from North Carolina State University in Animation, New Media, and Digital Imaging, which he received in 2008.
Abstract:
Much of CG technology is based on simulations of real-world practices. With the ability to paint with pixels, sculpt with polygons, render from virtual cameras, and digitally fabricate 3D forms, many new artists are increasingly meeting disciplines for the first time through their digital simulations. Furthermore, the digital environment often facilitates the integration of multiple disciplines and hybrid practices that are not inherent in their analog counterparts. This presentation will discuss the potential of digital tools to address traditional processes, both for learning and for new hybrid practices. What can we learn from the conventions and philosophies embedded in the software? How can we effectively integrate this technology into traditional arts courses without undermining the established disciplines? In what ways can we leverage hybrid practices for a deeper understanding of the crafts involved?
Biography:
Omar Khan is currently working as an Engineering Manager for an industrial and commercial firm. In his prior capacities, Omar served at various defense and commercial companies, including United Defense, BAE Systems, MAV6 and Curtiss Wright, where his roles included research and development, systems engineering, warfare and operations analysis, product management and international business development. Mr. Khan has authored several technical publications in the areas of modeling and simulation for naval weapon systems and holds patents in the same field. He received a Bachelor of Science degree in electrical engineering from the University of Engineering and Technology Lahore, a Master of Science in electrical engineering from Cleveland State University and an MBA from the University of Minnesota, Twin Cities.
Mr. Huang has over 40 years of engineering experience in academia, electro-hydraulic systems, sensor systems, servo systems, communication systems, robotics, system integration, ordnance systems, and modeling and simulation. Mr. Huang has worked on many naval and army weapon systems, and has served as a consultant for many industries in the areas of sensor systems, test equipment, medical devices, and ordnance systems. Mr. Huang is a former artillery officer. He has co-authored two books and over 60 technical publications, and has taught technical courses in different countries. Mr. Huang also holds several US and international patents ranging from devices and software to systems. He received his MSEE and Ph.D. degrees from the University of Minnesota.
Abstract:
The use of computer-generated, high-quality real-time video provides engineers, scientists and electronic game designers a powerful tool in so many applications that even the sky is no longer the limit. The advent of micro- and nanoelectronics further enables complicated devices to be put into smaller, inexpensive, and robust packages. During the last few years, smaller video-image-based devices have been installed in land-based vehicles to enhance driving comfort, convenience, and safety. These include navigational aids, GPS, collision avoidance devices, surround-view systems and many others. The proliferation of these devices is mainly due to the relatively inexpensive and short life span of land vehicles compared to that of airplanes (and submarines). The authors have previously developed a concept to aid helicopter pilots in landing their craft when it is not possible to use the out-of-the-window view for a safe landing. This paper extends that work to develop an aid for landing on a moving platform such as a shipboard heliport. For landing on a shipboard platform, in addition to the obstacles of water spray and mist (due to sea state conditions), frequent fog, and other weather-related elements, a moving platform with six degrees of freedom (three linear and three angular) creates even more challenges for the pilot. This paper provides a potential solution to the problems listed above. According to the analysis and preliminary computer simulation, the proposed landing aid may even have the potential to become an autonomous landing system and could be used in unmanned aerial vehicles as well.
Keywords: real-time, computer-generated video image, operator aid, autonomous systems.
Aditya Tuknait
Sunovatech Infra Pvt Ltd, India
Title: Realistic Human and Traffic Behaviour Simulation in 3D Visualization
Biography:
Aditya has been a professional 3D designer since 2000. He has worked for gaming giants such as EA, 2K and Disney on famous titles including Need for Speed, Battle Forge, Harry Potter and Burnout Paradise. Currently he is a Deputy Director at Sunovatech Infra Pvt Ltd, India. Over the past 5 years, Aditya has delivered more than 140 projects related to 3D, Virtual Reality, visualization and simulation for infrastructure and transportation across 8 different countries. He is also working on computer games for engineering students for the University of Qatar. At Sunovatech Infra Pvt Ltd he leads a team of around 200 artists creating 3D visualizations and developing PC and mobile games. His projects are entertaining and invite viewers to interact with them.
Abstract:
There has been an emphasis on simulation tools in the transportation industry since the early 80's, whereas several studies and models of pedestrian movement have been researched since the 90's. Today there are tools that can provide mathematical analysis of behaviours and predictions regarding a proposed development. These mathematical interpretations can only be understood by specialised transport planners or engineers, whereas the most critical decisions regarding any proposed development rest on political and public will. The need to simplify the mathematics into a visual medium that can be understood by the public and politicians in order to assess the impact is the reason behind the development of the algorithm that defines this paper. The raw mathematical outputs from the traffic simulations are converted to high-quality 3D visualisation using a Virtual Reality rendering processor. Traffic simulation software concentrates on the mathematical accuracy of traffic behaviour rather than realistic and accurate visualisation of the traffic and its surroundings. This is primarily due to the inability of existing software to handle detailed, complex 3D models and structures in the simulation environment. This technology (the VR Platform) is currently under the exclusive IP of Sunovatech and is used as the core of the visualisation process, wherein thousands of vehicles and pedestrians are animated as an automated process. Using the VR Platform, a highly realistic and accurate simulation of vehicles, pedestrians and traffic infrastructure such as signals and buildings can be achieved. This technology offers decision makers, traffic engineers and the general public a unique insight into traffic operations. It is highly cost-effective and an ideal tool for presenting complex ideas in any public consultation, presentation or litigation process. This presentation will focus on how to combine realistic human and transportation simulations in a 3D visualization along with urban design elements. The use of simulation in 3D visualization projects gives accurate results to planners, engineers, architects and emergency response departments to test and approve the design of the infrastructure. With this technology we have created stunning visualizations and provided solutions for multi-billion-dollar projects. We integrate 3D visualisation software with traffic micro-simulation tools to create a close-to-real environment in terms of behaviour, volumes, and routings. Calibrated and validated micro-simulation models are combined with a powerful rendering tool to visualise proposals before they are implemented on the ground.
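The conversion step described above can be illustrated with a toy example: per-timestep vehicle states logged by a micro-simulation (typically at around 10 Hz) are interpolated up to the renderer's frame rate and turned into 3D world transforms. The names, rates and log format below are illustrative assumptions, not Sunovatech's actual pipeline.

```python
import numpy as np

SIM_DT, RENDER_DT = 0.1, 1.0 / 60.0   # simulation step vs. render frame time

def interpolate_state(states, t):
    """states: rows of (time, x, y, heading_deg) for one vehicle."""
    times = states[:, 0]
    return (np.interp(t, times, states[:, 1]),
            np.interp(t, times, states[:, 2]),
            np.interp(t, times, states[:, 3]))

def transform_matrix(x, y, heading_deg):
    """4x4 world transform for the vehicle model (rotation about the up axis)."""
    a = np.radians(heading_deg)
    m = np.eye(4)
    m[0, 0], m[0, 2] = np.cos(a), np.sin(a)
    m[2, 0], m[2, 2] = -np.sin(a), np.cos(a)
    m[0, 3], m[2, 3] = x, y
    return m

log = np.array([[0.0, 0.0, 0.0, 0.0],         # a hypothetical 3-sample log
                [0.1, 1.2, 0.0, 5.0],
                [0.2, 2.5, 0.1, 9.0]])
for frame in range(3):
    print(transform_matrix(*interpolate_state(log, frame * RENDER_DT)))
```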
Xiang Feng and Wanggen Wan
Shanghai University, China & University of Technology, Sydney, Australia
Title: Physically-based Invertible Deformation Simulation of Solid Objects
Biography:
Xiang Feng is a PhD student at the School of Communication and Information Engineering, Shanghai University, and a member of the Institute of Smart City, Shanghai University. He received his BE degree from the School of Communication and Information Engineering, Shanghai University, in 2011, and has been pursuing his Master's and PhD there since. He is a dual doctoral degree student between Shanghai University and the University of Technology, Sydney. He was awarded a CSC Scholarship (China Scholarship Council) to study at the University of Technology, Sydney, between August 2014 and September 2015. He has authored six papers in internationally renowned journals and conferences in the areas of physically-based deformation and animation, and 3D modelling and reconstruction. He has been involved in several research projects, including the General Program of the National Natural Science Foundation of China and the National High Technology Research and Development Program of China.
Dr. Wanggen Wan has been a Full Professor at the School of Communication and Information Engineering, Shanghai University, since 2004. He is also Director of the Institute of Smart City, Shanghai University, and Dean of the International Office, Shanghai University. He is Vice Chair of the IEEE CIS Shanghai Chapter and Chair of the IET Shanghai Local Network. He is an IET Fellow, IEEE Senior Member and ACM Professional Member. He has been Co-Chairman of many well-known international conferences since 2008. His research interests include computer graphics, video and image processing, and data mining. He has authored over 200 academic papers in international journals and conferences, and has been involved in over 30 research projects as Principal Investigator. Dr. Wanggen Wan received his PhD degree from Xidian University, China, in 1992. From 1991 to 1992, he was a Visiting Scholar at the Department of Computer Engineering, Minsk Radio Engineering Institute, in the former USSR. He was a Postdoctoral Research Fellow at the Department of Information and Control Engineering, Xi'an Jiao-Tong University from 1993 to 1995, a Visiting Scholar at the Department of Electrical and Electronic Engineering, Hong Kong University of Science and Technology from 1998 to 1999, and a Visiting Professor and Section Head of the Multimedia Innovation Center, Hong Kong Polytechnic University from 2000 to 2004.
Abstract:
With the increased computing capacity of modern computers, physically based simulation of deformable objects has gradually evolved into an important tool in many applications of computer graphics, including haptics, computer games, and virtual surgery. Within physically based simulation, large-deformation simulation of solid objects has attracted much attention. During large-deformation simulation, especially interactive simulation, element inversion may arise. In this case, standard finite element methods and mass-spring systems are not suitable because they are not able to generate the elastic internal forces needed to recover from the inversion. This presentation will describe a method for invertible deformation of solid objects. We derive the internal forces and stiffness matrix of an invertible isotropic hyperelastic material from its energy density function. This method can be applied to any isotropic hyperelastic material whose energy density function is given in terms of strain invariants. To achieve realistic deformation, volume preservation is always pursued as an important property in physically based deformation simulation. We will discuss the volume preservation capacity of three popular invertible materials, the Saint Venant-Kirchhoff, Neo-Hookean and Mooney-Rivlin materials, from the perspective of the volume term in the energy density function. We will demonstrate how the volume preservation capacity of these three materials changes with their material parameters, such as the Lame coefficients and Poisson's ratio. Since solving for the new positions of the mesh can be decomposed into independently solving for the displacement of each vertex from the equilibrium equations of motion at each time step, we can use CPU multithreading to speed up the calculations. We will also present a CPU multithreaded implementation of the internal forces and stiffness matrix.
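As a concrete instance of an energy density expressed in strain invariants, the compressible Neo-Hookean material named above can be written in the following standard textbook form (the talk's invertible formulation, which must additionally remain well-defined when det F <= 0, is not reproduced here):

```latex
% Strain invariants of the deformation gradient F:
%   I_1 = \mathrm{tr}(F^T F), \qquad J = \det F
\Psi(F) = \frac{\mu}{2}\,(I_1 - 3) - \mu \ln J + \frac{\lambda}{2}\,(\ln J)^2
% First Piola-Kirchhoff stress, from which nodal internal forces are assembled:
P(F) = \frac{\partial \Psi}{\partial F} = \mu\,(F - F^{-T}) + \lambda \ln J \; F^{-T}
```

The logarithmic volume term in this energy density is the part whose weighting by the Lame coefficients governs the volume preservation behaviour compared in the talk.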
Seyed Reza Hashemi
Azad University, Iran
Title: Hardware-in-the-Loop Simulation of a Jet Engine Fuel Control Unit Using LabView
Biography:
Seyed Reza Hashemi, born on May 19, 1986, received a BSc from the Mechanical Engineering Department of Azad University of Najafabad, and an MSc from the Mechanical Engineering Department, Iran University of Science and Technology (IUST), Narmak, Tehran. He is now working as an engineering researcher on industrial automation, mechatronics, motion control, and robotics in his private company, and he also works part-time as a research assistant at Azad University.
Abstract:
Hardware-in-the-loop (HIL) is a type of real-time simulation test that differs from a pure real-time simulation in that a real component is added to the loop. By applying the HIL technique, a component of a system can be tested physically in almost real conditions. Not only can this test save time and cost, it also removes concerns about test safety. The tested component is often an electronic control unit (ECU), since most dynamic systems, especially in the aerospace and automobile industries, have a main controller (ECU). Sometimes, HIL is also of interest for evaluating the performance of other mechanical components in a system. Since HIL includes numerical and physical components, a transfer system is required to link these parts. The transfer system typically consists of a set of actuators and sensors. In order to get accurate test results, the dynamic effects of the transfer system need to be mitigated. The fuel control unit (FCU) is an electro-hydraulic component of the fuel control system in gas turbine engines. Investigation of FCU performance through the HIL technique requires numerical models of the other related parts, such as the jet engine and the designed electronic control unit. In addition, a transfer system is employed to link the FCU hardware and the numerical model. The objective of this study was to implement the HIL simulation of the FCU using LabView and MATLAB. To get accurate simulation results, inverse and polynomial compensation techniques were proposed to compensate for the time delays resulting from the inherent dynamics of the transfer system. Finally, the results obtained by applying both methods were compared.
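To illustrate the inverse-compensation idea (in Python rather than the LabView/MATLAB environment the study used), the sketch below models the transfer system as a first-order lag G(s) = 1/(tau*s + 1) and pre-filters the desired signal with the inverse model u = y_des + tau * dy_des/dt; the time constant and test signal are illustrative assumptions, not the paper's identified dynamics.

```python
import numpy as np

tau, dt = 0.05, 0.001                 # assumed actuator lag and sample time
t = np.arange(0, 1, dt)
y_des = np.sin(2 * np.pi * 2 * t)     # desired actuator output

# Inverse compensation: feed the actuator the inverse-model-filtered command.
u = y_des + tau * np.gradient(y_des, dt)

def first_order_lag(u, tau, dt):
    """Simulate the actuator y' = (u - y) / tau with forward Euler."""
    y = np.zeros_like(u)
    for k in range(1, len(u)):
        y[k] = y[k - 1] + dt * (u[k - 1] - y[k - 1]) / tau
    return y

print(np.abs(first_order_lag(u, tau, dt) - y_des).max())      # compensated: small
print(np.abs(first_order_lag(y_des, tau, dt) - y_des).max())  # uncompensated: larger
```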
- Computational Photography
Biography:
Kari Pulli is VP of Computational Imaging at Light. Previously, he led research teams at NVIDIA Research (Senior Director) and at Nokia Research (Nokia Fellow) on Computational Photography, Computer Vision, and Augmented Reality. He headed Nokia's graphics technology, and contributed to many Khronos and JCP mobile graphics and media standards, and wrote a book on mobile 3D graphics standards. Kari holds CS degrees from Univ. Minnesota (BSc), Univ. Oulu (MSc, Lic. Tech.), Univ. Washington (PhD); and an MBA from Univ. Oulu. He has taught and worked as a researcher at Stanford, Univ. Oulu, and MIT.
Abstract:
- Virtual and Augmented Reality
Session Introduction
John Quarles
University of Texas at San Antonio, USA
Title: Virtual Reality for Persons with Disabilities: Current Research and Future Challenges
Biography:
Dr. John Quarles is an assistant professor at the University of Texas at San Antonio in the department of computer science. Dr. Quarles is both a virtual reality researcher and a multiple sclerosis patient, who has an array of disabilities. This gives him a unique perspective on how virtual reality can potentially improve the quality of life of persons with disabilities. In 2014, he received the prestigious National Science Foundation’s CAREER award for his work in this area.
Abstract:
Immersive Virtual Reality (VR) has been in research labs since the 1960s, but it will soon finally make it into the home (hopefully). Facebook's $2 billion acquisition of Oculus, a small Kickstarter-funded startup for immersive head-mounted displays, was a historic landmark in 2014 towards the goal of affordable, home-based VR systems. However, what impact will this have on persons with disabilities? Will at-home VR be universally usable and accessible? Based on current research in VR, there are many challenges that must be overcome for VR to be usable and beneficial for persons with disabilities. Although researchers have studied fundamental aspects of VR displays and interaction, such as the effects of presence (i.e., the sense of 'being there', the suspension of disbelief), interaction techniques, latency, field of view, and cybersickness, almost all of the prior research has been conducted with healthy persons. Thus, it is not known how to effectively design an immersive VR experience for persons with disabilities, which could have a significant impact on emerging fields like VR rehabilitation and serious games. This talk explores what we know (or what we think we know) about how persons with disabilities experience VR and highlights the grand challenges that, if met, could significantly improve quality of life for persons with disabilities.
Sunil Thankamushy
Mount San Antonio College, USA
Title: Building my Educational Augmented Reality Game App
Biography:
Sunil Thankamushy is a US-based seventeen-year video game industry professional who was part of the core teams that developed highly acclaimed and successful video game franchises such as CALL OF DUTY™: FINEST HOUR™ and MEDAL OF HONOR™. A graduate of UCLA, Sunil was hired by the DreamWorks Interactive studio as one of its first animators. After seven years of working at DreamWorks Interactive, and later Electronic Arts, he joined hands with other game veterans to co-found Spark Unlimited™, a game studio based in Los Angeles. In the five years after its inception, Spark built a long body of work that includes helping launch the now multi-billion-dollar CALL OF DUTY™ franchise with FINEST HOUR™, as well as TURNING POINT™: FALL OF LIBERTY™, LEGENDARY™, and LOST PLANET 3™. Blending technology and animation has been a passion for Sunil. In every stage of his career, he has successfully created animation paradigms and technology to improve the level of immersion in the virtual environment of the game, and heighten the experience of realism for the player. After shipping more than 8 video game titles, Sunil decided to change his life direction and set up DEEPBLUE Worlds Inc, a knowledge-based games studio making innovative products for children. His most recent product is an Augmented Reality based mobile app called DINO ON MY DESK. Sunil recently joined Mt. San Antonio College, Walnut, California, as professor of gaming and animation. He lives with his wife, Diana, and two children in beautiful San Diego, California.
Abstract:
This talk describes my experiences with my team using the strengths of Augmented Reality (AR) to design a fun and educational app series. The name of the product is Dino On My Desk. The core technologies we used are Qualcomm Vuforia as the AR platform, in conjunction with Unity as the gaming engine. As newcomers to the field at the time, we found that our best resource was our own DIY spirit. I hired a loosely networked team of developers from around the world, including past students of mine (I teach animation and gaming at Mt. SAC college in California), to get the job done. The initial iteration was a 'confidence building exercise' for us all, letting us see a mockup of the product. The proof of this was that, with very few features, we were able to entertain test audiences running our AR app on their mobile devices. The next two iterations were built on top of the previous ones, each time methodically adding functionality and engagement. I am a firm believer in the idea that to be effective, a product has to leverage the unique qualities offered by the technology it is built on. In the process of building this product, we continually uncovered the unique interactions that AR offers. An overview of the AR genres that have evolved over the past few years, and the companies behind them, shows a trajectory that starts from the 'Magical', moves through the 'Function-driven', and ends at the 'Enrichment-driven'. Finally, I would like to demo the product that started my journey into the mesmerizing field of Augmented Reality.
Adam Watkins
Augmented Ideas, LLC, San Antonio, Texas, USA
Title: Participatory Museum Experiences…Augmented
Biography:
Adam Watkins is the CEO of Augmented Ideas, LLC (http://www.augmentedideas.com) and Professor of 3D Animation & Game Design (http://www.uiw3d.com) in the School of Media & Design at the University of the Incarnate Word in San Antonio. Watkins holds an MFA in 3D Animation and a BFA in Theatre Arts from Utah State University. He is the author of 12 books and over 100 articles on 3D animation and game design.
Abstract:
In two recent exhibitions, the McNay Art Museum in San Antonio, Texas was looking for ways to convert visitors into participants. In its search for ways to engage patrons, the McNay partnered with Augmented Ideas, LLC, led by two University of the Incarnate Word professors, to create new experiences centered around the exhibitions. The first exhibition, Real/Surreal, was a traveling exhibition of surreal, hyper-real and realistic paintings. In this exhibit, augmented reality was used to create a discovery “game” in which the visitor finds visual clues within the paintings. Once found, a clue unlocks questions, activities, and information about the paintings, displayed on top of the paintings themselves. This activity encourages visitors to look more carefully and actively at the paintings, and allows a variety of multimedia experiences without interfering with the museum patron's ability to experience the original art. In the second exhibition, CUT! Costumes and the Cinema, patrons used their mobile devices to collect virtual versions of the costumes on display. Using the camera features of the mobile device, they were then able to “try on” these costumes in a virtual dressing room. The results of this virtual dressing room could be shared with friends and with the McNay for use on its Facebook page. Together, using unobtrusive augmented reality techniques, the McNay was able to engage a new generation of patrons and provide an entirely new level of interaction and information without using any exhibition space or imposing on the original artworks.
David Mesple
Rocky Mountain College of Art and Design, USA
Title: The Virtual Normative Body, Fact or Fiction?
Biography:
David Mesple’ is an American artist who exhibits around the world. His work has been profiled in texts, magazines, music CDs, and public television presentations. He is one of the few contemporary artists to be honored with a two-person exhibit with one of the Masters of Western Art, Rembrandt van Rijn. He is a non-dominant left-brained and right-brained artist, capable of linear, multi-linear, and non-linear thinking, and does not compartmentalize information, nor assert that knowledge resides exclusively within certain disciplines or domains. David believes that all information lies on a spectrum of immense complexity and diversity, and is available to all problem-solvers. Mesple’ is a Professor of Art and Ideation at Rocky Mountain College of Art and Design, and is working on his interdisciplinary PhD in “Virtuosity Studies”, combining Fine Arts, Neuroscience, Physics and Philosophy.
Abstract:
There are three representative types of virtual normative bodies: the virtual representation/simulacrum/mimesis of an actual normative human body; the virtual normative human body within the genre of CGI-manifested characters; and the performance of the virtual normative human that is not embodied visually. The first, representation/simulacrum/mimesis, has become alarmingly believable. Audiences struggle to detect virtuality in cases where exact mimesis is the goal. But just as the Greeks discovered when they were able to make exact marble replicas of the human body, the neurological trait of being “hardwired to abstraction” (Ramachandran) led to making non-normative human sculptures, increasing aesthetic appeal. We see this in representations of the body in art and advertising today, so it comes as no surprise to see mimesis altered for purely aesthetic purposes, not just for supernatural narratives. In a time when the real and the virtual are becoming inseparable, philosopher Paul Virilio describes a new upheaval of our real-time perspective, akin to the effect of perspective created during the Quattrocento. Virilio describes “a very strange kind of perspective, a ‘stereo’ perspective, real space-real time, which gives us another kind of ‘relief’”, forcing reconfigurations of culture and virtual characters within the mediums of film and performance. The history of performing the virtual normative body in film and theater may begin with the sotto voce of invisible characters, from Rosaline in film versions of Romeo and Juliet to The Wizard of Oz, culminating in Samantha, the non-normative, non-embodied character in Spike Jonze’s “Her”. As Virilio’s “stereo” perspective becomes normative, this paper focuses on how performing the normative body, virtually, redefines the roles of actors, directors, and audiences.
Kenneth Ritter
University of Louisiana at Lafayette, USA
Title: Overview and Assessment of Unity Toolkits for Virtual Reality Applications
Biography:
Kenneth Ritter is a research assistant and graduate student at the University of Louisiana at Lafayette, Louisiana. Ritter is working on a PhD in Systems Engineering with an expected graduation date of December 2016. He obtained a Master of Science in Solar Energy Engineering from Högskolan Dalarna in Borlänge, Sweden. At UL, Ritter has directed the creation of the Virtual Energy Center, an educational game using a scale CAD model of the Cleco Alternative Energy Research Center in Crowley, Louisiana. Ritter has experience with AutoCAD, SolidWorks, Unity3D, and programming in C# and JavaScript. Currently, Ritter is working to develop an immersive, networked, collaborative virtual reality environment for education about alternative energy technologies.
Abstract:
As interest in Virtual Reality (VR) increases, so does the number of software toolkits available for various VR applications. Given that more games are being made with the Unity game engine than with any other game technology, several of these toolkits are designed to be imported directly into Unity. A comparison of the toolkits' features and interaction support is needed for Unity developers to properly select one for a specific application. This paper presents an overview and comparison of several virtual reality toolkits available for developers using the Unity game engine. For comparing VR interaction, a scene was created in Unity and tested at the three-sided Cave Automatic Virtual Environment (CAVE) at the Rougeou VR Lab. In the testbed scene, the user must disassemble the major components of the Electrotherm Green Machine at the Virtual Energy Center. The three toolkits that met the criteria for this comparison are getReal3D, MiddleVR, and the Reality-based User Interface System (RUIS). Each of these toolkits can be imported into a Unity scene to bring VR interaction and display to multi-projection immersive environments like CAVEs. This paper also provides how-to guides that can easily assist users in installing and using these toolkits to add VR capability to their Unity game. A comparative analysis is given on performance, flexibility and ease of use for each toolkit regarding VR interaction and CAVE display. MiddleVR was found to be the highest-performing and most versatile toolkit for CAVE display and interaction. However, for some display applications, such as the CAVE2, the getReal3D toolkit may be better suited. Regarding cost, RUIS is the clear winner, as it is available for free under the Lesser General Public License (LGPL) Version 3.
Biography:
Andres Montenegro is the coordinator of the Modeling and Animation Concentration Area in the Department of Visual Communication and Design, at the College of Visual and Performing Arts at Indiana University-Purdue University, Fort Wayne, Indiana. His work develops immersive environments using real-time 3D animations while integrating physical computing in installations based on interactive responses and multichannel projections. He has extensive experience with software and hardware oriented toward the generation of different styles of rendered images. Painting is his main source of inspiration and subject of research. He received his BFA in Art and Education from the University of Chile in 1986, his MA from the University of Playa Ancha, Chile, in 1996, and his MFA in Digital Arts from the University of Oregon in 2006. While studying there he was awarded the Clarice Krieg Scholarship and a University of Oregon Scholarship in 2004, 2005, and 2006.
Abstract:
This presentation will articulate the conceptual and practical implementation of an interactive system based on animations and 3D models, utilizing Augmented Reality quick-response (QR) markers to display graphics. The proposed model will also open a discussion about how to present dynamic navigation within an artificial setting or environment created through AR. Augmented Reality, in the world of computer graphics, is simply defined as the action of superimposing, via software, an artificial (computer-generated) construction over the real-world surface. This visualization process occurs when the camera of a mobile device like the iPhone or iPad, or another holographic-optics-based gadget, perceives and exposes graphics and images linked to a marker component attached to a real-world object. Today the potential of interactive animations and images combined with text makes content development in Augmented Reality a very promising avenue for implementing an artistic narrative based on multiple responses. The viewer, of course, will be able to organize or manipulate this system. The same conceptual and practical model can be implemented for Virtual Reality immersive environments. The presentation will include several examples developed by my students, as well as my personal projects. The audience will appreciate the use of tactile gestures, body movements (through accelerometers) and other sensing capabilities provided by mobile devices (based on Android or iOS). The ultimate goal of the presentation is to feature a compelling narrative based on an experiential, phenomenological approach, achieved by the manipulation of animations, images, 3D models, and virtual environments.
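For readers who want to experiment, the fragment below shows the marker-detection step that underlies such systems, using OpenCV's ArUco module as a freely available stand-in for the commercial AR platforms mentioned in this track (it requires opencv-contrib-python, and the exact API varies between OpenCV versions; newer releases wrap it in cv2.aruco.ArucoDetector).

```python
import cv2

# Detect square fiducial markers in one camera frame; in a full AR pipeline
# the marker pose would then drive where the 3D animation is superimposed.
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

cap = cv2.VideoCapture(0)             # the device camera
ok, frame = cap.read()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary)
    if ids is not None:
        print("found markers:", ids.ravel())
cap.release()
```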
Biography:
A traditional and CG modeler and animator with a BA in Visual Communications from SAIC with a focus on Animation, he creates virtual worlds via Wholebitmedia. He has made the crossover from animation to game programming, applying traditional concepts in 3D space for interactivity. He is deeply influenced by geek culture, through manga and science fiction (including William Gibson as well as Masamune Shirow) and through electronica and rave culture. He is highly active in the Houston community as a member of various social groups such as the Houston Unity Meetup, Animate Houston, Girl Develop It, and VR Houston, and is working on VR worlds out of an interest in human interactivity with metaphysics, an interest he has incorporated into his thought structures for artificial intelligences and gaming algorithms.
Abstract:
As the networks we use overtake the significance of our actual physical data, virtual worlds become a more accurate representation of our surroundings. Virtual worlds and metaphysics go hand in hand in a matrix of accidental artificial intelligence, in the form of statistical data collected about our activity. When we wonder about the possibility of the virtual versus the real, we must consider our need for a metaphysical connection with the technology that goes beyond the data. Our connection to the real world is lost when we fail to realize the potential of the networks we use. During this talk the speaker will engage the audience to search for or explain different types of metaphysical experiences and re-evaluate them as part of a paradigm for artistic endeavor. As we move into an age where computers will no longer be limited in computing power, so too will artists be free to be unlimited in their creativity. We currently live, metaphorically speaking, in a dark age when it comes to communication between individuals, not only on but also off the grid. Though all of us are connected through technology, the use of these tools at a higher level of communication remains in its infancy. Metaphysics is thought of as something illusory or spiritual, but most can claim to have experienced a moment in time where time itself presented itself as something intangible. As humanity moves into this new cyber-terrain, where physics takes on less significance than metaphysics, we lose sight of the potential of humanity behind the benchmark of computational ability. It has become more important to emphasize and create new languages for computers and scientists to use than to create a new set of standards by which humans can be measured. The speaker plans to discuss the current state of computing as it relates to the engagement of the viewer with relevant relativistic content.
- Computer Vision
Session Introduction
Fouad Bousetouane
RTIS Laboratory, University of Nevada, Las Vegas, USA
Title: Uncertainty Quantification in Computer Vision Problems: Application to Transportation
Biography:
Dr. Fouad Bousetouane received his BSc in computer science and mathematics in 2008 and a Master by Research degree in Artificial Intelligence and Pattern Recognition from Badji Mokhtar University, Algeria, in 2010. He obtained his PhD in Artificial Intelligence and Computer Vision from UBMA University (Algeria), co-supported by the LISIC Laboratory (France), in 2014. He was valedictorian and is a member of the International Association of Computer Science and Information Technology (IACSIT) and the Computer Vision Foundation (CVF). He has collaborated with researchers from CNRS-Lille (France) and the LISIC Laboratory (France) on developing computer vision algorithms for multi-object tracking, handoff management, dynamic/static occlusion handling and re-identification across multi-sensor networks. He is co-founder of a robotics and intelligent computing startup. He has authored many technical articles in machine learning, computer vision, and satellite image processing, and has served as a reviewer for top-ranked journals and conferences, including the IET Image Processing journal, IEEE Transactions on Intelligent Transportation Systems, IEEE-IROS 2012 and IEEE-ITSC 2015. Currently, he is a postdoctoral researcher in computer vision and artificial intelligence at the Real-Time Intelligent Systems (RTIS) laboratory, University of Nevada, Las Vegas, USA. His research interests include artificial intelligence, pattern recognition, probabilistic graphical models and Bayesian computation, machine learning, computer vision and deep learning.
Abstract:
Nowadays, monitoring roadways to ensure the safety of vehicles and pedestrians is a challenging problem because of the high volume of traffic. The development of intelligent and omnipresent systems for automatic monitoring of modern roadways is becoming indispensable. Technological advances in sensor design, communication, computer vision and distributed inference are stimulating the development of new, innovative and intelligent techniques that will help transportation agencies and enforcement officers ensure safety and improve traffic flow. Visual sensor network technology is seen as playing an important role in such applications. However, the aggregation and interpretation of distributed visual information in real time remains a major challenge. The complexity of such operations is mainly caused by the presence of multi-level uncertainty: uncertainty in the trajectory estimation of vehicles, in the visual signatures of vehicles and pedestrians, in travel time across visual sensors, in poses, and so on. This explosion of uncertainty will certainly affect the global decisions of automated roadway monitoring systems. The major question to be asked today is: how can we quantify this explosion of uncertainty to improve the decision process of automated visual monitoring systems? In this talk, I will attempt to answer this question through the presentation of new approaches that integrate a combination of multi-level distributed artificial intelligence, dynamic computer vision techniques and filtering theory.
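As one concrete example of the filtering-theory ingredient, the sketch below runs a constant-velocity Kalman filter on noisy visual detections of a vehicle; the propagated covariance P is an explicit, quantified measure of trajectory uncertainty of the kind the talk discusses. All noise levels and the measurement stream are made up for illustration.

```python
import numpy as np

dt = 0.1
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1.]])
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0.]])   # only position is observed
Q = 0.01 * np.eye(4)                          # process noise
R = 0.5 * np.eye(2)                           # detection (measurement) noise

x, P = np.zeros(4), np.eye(4)                 # state and its uncertainty
for z in np.random.multivariate_normal([5, 3], R, size=50):  # fake detections
    x, P = F @ x, F @ P @ F.T + Q                            # predict
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)             # Kalman gain
    x = x + K @ (z - H @ x)                                  # update
    P = (np.eye(4) - K @ H) @ P
print(np.trace(P))    # scalar summary of the remaining trajectory uncertainty
```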
Junaid Baber
University of Balochistan, Pakistan
Title: Automatic Image Segmentation for Large Collections
Biography:
Junaid Baber is an Assistant Professor at the University of Balochistan. He received his M.S. and Ph.D. degrees in Computer Science from the Asian Institute of Technology in 2010 and 2013, respectively. He spent 10 months as a research intern at the National Institute of Informatics in Tokyo, Japan. His research interests include image processing, multimedia mining, and machine learning. He is a member of the IEEE, and has served as a TPC member and reviewer for many conferences and impact-factor journals.
Abstract:
Image segmentation is one of the most significant tasks in computer vision. Since fully automatic techniques are difficult, a number of interactive techniques are used for image segmentation. The results of these techniques largely depend on user feedback, and it is difficult to obtain good interactions for large databases. Automatic image segmentation is therefore becoming a significant objective in computer vision and image analysis. We propose an automatic approach to detect the foreground: we apply the Maximal Similarity Based Region Merging (MSRM) technique for region merging and use the image boundary to identify foreground regions. The results confirm the effectiveness of the approach, especially for extracting multiple objects from the background.
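The similarity test at the heart of region merging can be sketched compactly: adjacent regions of an over-segmented image are compared with the Bhattacharyya coefficient of their intensity histograms, and each region is merged with its most similar neighbour. The full MSRM algorithm adds the boundary-based foreground/background labelling described above; the regions here are random stand-ins.

```python
import numpy as np

def histogram(pixels, bins=16):
    h, _ = np.histogram(pixels, bins=bins, range=(0, 256))
    return h / max(h.sum(), 1)

def bhattacharyya(h1, h2):
    """Similarity of two normalized histograms; 1.0 means identical."""
    return np.sum(np.sqrt(h1 * h2))

regions = {i: np.random.randint(0, 256, 200) for i in range(4)}  # pixel sets
neighbours = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}              # adjacency

def best_merge(r):
    """Return the neighbour of region r with maximal similarity."""
    hr = histogram(regions[r])
    return max(neighbours[r],
               key=lambda n: bhattacharyya(hr, histogram(regions[n])))

print(best_merge(0))
```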
Biography:
Adrien Gaidon is a Research Scientist in the Computer Vision group at the Xerox Research Centre Europe (XRCE). His research interests lie in the fields of Computer Vision and Machine Learning, with a focus on automatic video understanding (e.g. behavior recognition, motion analysis, event detection) and object recognition. Adrien graduated from the ENSIMAG engineering school and obtained an MSc in Artificial Intelligence from Université Joseph Fourier, Grenoble, France, in 2008. That year, he participated in the team that won the PASCAL VOC computer vision competition. He then worked as a doctoral candidate at the Microsoft Research-Inria joint center in Paris and in the LEAR team at Inria Grenoble, under the supervision of Zaid Harchaoui and Cordelia Schmid. He received his PhD from the Université de Grenoble in 2012 and joined XRCE as a permanent member in 2013.
Abstract:
Complex activities, e.g., pole vaulting, are composed of a variable number of sub-events connected by complex spatio-temporal relations, whereas simple actions can be represented as sequences of short temporal parts. In this work, we learn hierarchical representations of activity videos in an unsupervised manner. These hierarchies of mid-level motion components are data-driven decompositions specific to each video. We introduce a spectral divisive clustering algorithm to efficiently extract a hierarchy over a large number of tracklets (i.e., local trajectories). We use this structure to represent a video as an unordered binary tree. We model this tree using nested histograms of local motion features. We provide an efficient positive definite kernel that computes the structural and visual similarity of two hierarchical decompositions by relying on models of their parent-child relations. We present experimental results on four recent challenging benchmarks: the High Five dataset [Patron-Perez et al., 2010], the Olympic Sports dataset [Niebles et al., 2010], the Hollywood 2 dataset [Marszalek et al., 2009], and the HMDB dataset [Kuehne et al., 2011]. We show that per-video hierarchies provide additional information for activity recognition. Our approach improves over unstructured activity models, baselines using other motion decomposition algorithms, and the state of the art.
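As a rough illustration of the divisive idea only (using off-the-shelf spectral clustering, not the authors’ algorithm or kernel), the sketch below recursively bisects a set of tracklet descriptors, yielding the kind of unordered binary tree the abstract describes:

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def divisive_tree(tracklets, min_size=8):
    """Recursively bisect tracklet descriptors into an unordered binary
    tree, a rough stand-in for a per-video motion hierarchy."""
    if len(tracklets) < min_size:
        return {"leaf": tracklets}
    labels = SpectralClustering(n_clusters=2, random_state=0).fit_predict(tracklets)
    left, right = tracklets[labels == 0], tracklets[labels == 1]
    if len(left) == 0 or len(right) == 0:   # degenerate split: stop here
        return {"leaf": tracklets}
    return {"left": divisive_tree(left, min_size),
            "right": divisive_tree(right, min_size)}

# Toy usage: 100 "tracklets", each described by a 4D motion feature.
rng = np.random.default_rng(0)
tree = divisive_tree(rng.normal(size=(100, 4)))
```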
- Visualization
Biography:
Howard Kaplan is the head of the Advanced Visualization Center at the University of South Florida in Tampa. He uses multiple aspects of visualization as a means of study and application. Many of his visualization applications revolve around real-world data, 3D graphics and simulation, and 2D interactive media. He received a BFA from Ringling College of Art and an M.Ed. from the University of South Florida, and is currently pursuing a Ph.D. in Engineering Science, Biomedical and Chemical Engineering. His work has been featured in the journal Science, Wired.com, ACM SIGGRAPH, and Discovery.com. He was also selected by the Center for Digital Education as one of its Top 30 Technologists, Transformers and Trailblazers in 2014.
Abstract:
Most classrooms utilize generic two-dimensional representations in the form of scientific illustrations. In this talk we discuss various academic practices that have been used to enhance learning using 3D printing and digital modeling technologies. Topics will explore the integration of data into digital models and, finally, physical objects. Multiple 3D software applications will be used to demonstrate the process involved in modeling, encoding, preparing, and printing digital models. This presentation will allow users to take an expanded view of interdisciplinary approaches to developing 3D-print-ready models with added information in the form of tactile visualizations. In this way students can feel the object and get some sense of the concept upon which the data is based. Additionally, this approach allows educational material to be customized and individualized. By providing a physical and tactile representation, as well as the opportunity to take part in the process of creating tactile visualizations, we believe we can more effectively and efficiently aid the development of mental images and the transfer of prior knowledge to new contexts, as well as positively contribute to shared and authentic collaborative learning experiences. One particular area of interest, for example, is the use of 3D printing technology as an educational tool for blind and visually impaired learners.
Biography:
Rebecca Ruige Xu currently teaches computer art and animation as an Associate Professor in the College of Visual and Performing Arts at Syracuse University. Her artwork and research interests include experimental animation, visual music, artistic data visualization, interactive installations, digital performance, and virtual reality. Her recent work has appeared at the Ars Electronica Animation Festival; SIGGRAPH Art Gallery; Museum of Contemporary Art, Italy; Aesthetica Short Film Festival, UK; CYNETart, Germany; International Digital Art Exhibition, China; Los Angeles Center for Digital Art; and the Boston Cyberarts Festival. She has also been a research fellow at the Transactional Records Access Clearinghouse, Syracuse University, since 2011.
Abstract:
In recent years we have seen an increasing interest in data visualization in the artistic community. Many data-oriented artworks use sophisticated visualization techniques to express a point of view or pursue a persuasive goal. Meanwhile, the attitude that visualizations can be used to persuade as well as analyze has been embraced by more people in the information visualization community. This talk shares my experiences and reflections on creating data visualization as artwork via case studies of two recent projects. It presents a workflow from conceptual development and data analysis to algorithm development, procedural modeling, and final image production. It aims to offer insight into the artist’s effort to balance persuasive goals and analytic tasks. Furthermore, it raises questions about the role artistic data visualization plays in helping people comprehend data and the influence this artistic exploration might have in shifting public opinion.
Biography:
Robert S. Laramee received a bachelor’s degree in physics, cum laude, from the University of Massachusetts, Amherst (ZooMass) in 1997. In 2000, he received a master’s degree in computer science from the University of New Hampshire, Durham. He was awarded a PhD from the Vienna University of Technology (Gruess Gott TUWien), Austria, at the Institute of Computer Graphics and Algorithms in 2005. From 2001 to 2006 he was a researcher at the VRVis Research Center (www.vrvis.at) and a software engineer at AVL (www.avl.com) in the department of Advanced Simulation Technologies. Currently he is an Associate Professor at Swansea University (Prifysgol Cymru Abertawe), Wales, in the Department of Computer Science (Adran Gwyddor Cyfrifiadur). His research interests are in the areas of big data visualization, visual analytics, and human-computer interaction. He has published over 100 peer-reviewed papers in scientific conferences and journals and served as Conference Co-Chair of EuroVis 2014, the premier conference on data visualization in Europe. His work has been cited over 2,400 times according to Google Scholar, and his research videos have been viewed over 6,000 times according to YouTube.
Abstract:
With the advancement of simulation and data storage technologies and the ever-decreasing costs of hardware, our ability to derive and store data is unprecedented. However, a large gap remains between our ability to generate and store large collections of complex, time-dependent simulation data and our ability to derive useful knowledge from it. Visualization exploits our most powerful sense, vision, in order to derive knowledge and gain insight into large, multi-variate flow simulation data sets that describe complicated and often time-dependent events. This talk presents a selection of state-of-the-art flow visualization techniques and applications in the area of computational fluid dynamics (CFD) and foam simulation, showcasing some of visualization’s strengths, weaknesses, and goals. We describe interdisciplinary projects based on flow and foam motion, where visualization is used to address fundamental questions, the answers to which we hope to discover in various large, complex, and time-dependent phenomena.
- Human-Computer Interaction
Session Introduction
M. Ali Mirzaei
CNRS researcher Paris Institute of Technology, Paris, France
Title: Acceleration of Image Processing on FPGA-GPU Systems and the Effect of the Velocity on the Sensory Conflict
Biography:
M. Ali Mirzaei is working as a CNRS researcher on the ATLAS Experiment, FTK project, and AMchip team at CERN. He completed his PhD in signal processing at the Paris Institute of Technology and his MSc in Electronics Engineering at Imperial College London. He has published more than 30 papers in reputed journals and conferences and has been serving as an editorial board member and reviewer for IEEE journals.
Abstract:
Nowadays, the acceleration of image processing algorithms is widely used in different imaging and display systems, with FPGAs and GPUs playing an extremely important role in accelerator architectures. However, over- or under-speeding might create a sensory conflict. Sensory conflict in oculo-vestibular dynamics has been a fundamental research question affecting several domains, including engineering, aviation, emerging technologies, and the car industry, and is considered a serious industrial challenge in display technology and advanced real-time image processing systems. Finding a practical solution for sensory conflict in real/virtual environments is essential, because a set of efficient interaction (navigation/manipulation) interfaces may be proposed based on it. Reliable research results will directly influence technologies such as aviation (flight simulators, drone ground control, unmanned vehicle control and navigation), the car industry (car simulators, manufacturing, assembly and disassembly of compartments), display systems, robotics, and training. In addition, it can improve the quality of cyber products in areas such as games, HCI, and automation. Different teams and research groups have investigated this problem across the world, studying it from various points of view, including psychology, psychophysiology, neuroscience, computer vision, Man-Machine Interface (MMI), Human-Computer Interaction (HCI), user studies, biology, robotics, and telecommunication. This presentation approaches the sensory conflict problem from modeling, signal processing, and computational neuroscience perspectives, with the main focus on signal processing. Simply put, it will be shown how the speed of visual flow, texture, and the distance from the visual flow can affect sensory conflict in a synthetic environment; the result will then be verified using modeling and experimental data analysis. Nearly the entire display system and test bench was developed on the Windows platform and NVIDIA QuadroPlex GPUs, while the detection filters were developed and accelerated on FPGA. To simplify the development procedure for newcomer developers and future researchers, all the GPU kernels, C++ code, MATLAB engine, and wireless network telecommunication and interfacing toolboxes were wrapped under JavaScript in the software platform, which makes development very fast and easy. Enormous effort in debugging and software testing went into building such a user-friendly and handy platform.
Biography:
Syed Zain Masood holds a PhD in Computer Science, with emphasis in the field of Computer Vision and Machine Learning, from University of Central Florida (UCF). He has expertise pertaining to action/gesture recognition, object detection, shadow detection/removal, face recognition and computational photography. Most of his PhD work is focused on recognizing actions in complex scenes as well as achieving a balance between accuracy and latency for action recognition for Human Computer Interaction (HCI) environments. He is one of the co-founders of a startup company called Sighthound, Inc. that deals with intelligent surveillance software for residential use. He heads the research department and is focused on tasks like object detection, face detection and recognition, tracking, etc.
Abstract:
An important aspect in designing interactive, action-based interfaces is reliably recognizing actions with minimal latency. High latency causes the system’s feedback to lag behind user actions and thus significantly degrades the interactivity of the user experience. This talk presents algorithms for reducing latency when recognizing actions. We use a latency-aware learning formulation to train a logistic regression-based classifier that automatically determines distinctive canonical poses from the data and uses these to robustly recognize actions in the presence of ambiguous poses. We introduce a novel (publicly released) dataset for the purpose of our experiments. Comparisons of our method against both a Bag of Words and a Conditional Random Field (CRF) classifier show improved recognition performance for both pre-segmented and online classification tasks. Additionally, we employ GentleBoost to reduce our feature set and further improve our results. We then present experiments that explore the accuracy/latency trade-off.
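A minimal sketch of the general recipe follows, assuming hypothetical per-frame pose descriptors and using scikit-learn’s plain logistic regression in place of the authors’ latency-aware formulation: a decision can be emitted as soon as a single frame is sufficiently confident, which is where the latency saving comes from.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical data: per-frame pose descriptors (e.g., joint angles) with
# action labels. Shapes and names are assumptions for illustration.
rng = np.random.default_rng(1)
X_train = rng.normal(size=(500, 30))       # 500 frames x 30 pose features
y_train = rng.integers(0, 3, size=500)     # 3 action classes

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def classify_online(frames, threshold=0.8):
    """Emit a decision as soon as one frame is confident enough, trading
    a little accuracy for low latency."""
    for t, frame in enumerate(frames):
        probs = clf.predict_proba(frame.reshape(1, -1))[0]
        if probs.max() >= threshold:
            return int(probs.argmax()), t   # (action, latency in frames)
    return int(probs.argmax()), len(frames) - 1  # fall back at sequence end

action, latency = classify_online(rng.normal(size=(40, 30)))
```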
- Rendering
Session Introduction
Scott Swearingen and Kyoung Lee Swearingen
University of Texas at Dallas, USA
Title: Pushing the Physical Arts Deeper into Real-Time Rendering
Biography:
Scott Swearingen is an artist, developer, and educator who creates interactive multimedia spaces that blur the boundaries between the virtual and the practical. He has been working at the intersection of art and technology for nearly 20 years, specializing in digital imaging, kinetic sculpture, video games, and virtual environments. His work has been widely published and has garnered recognition from the Academy of Interactive Arts and Sciences as well as the Game Developers Choice Awards. He has collaborated on several award-winning franchises including Medal of Honor, The Simpsons, Dead Space, and The Sims. Kyoung Lee Swearingen has worked in the film industry for the last decade on a variety of features and shorts including Ratatouille, WALL-E, Up, Cars 2, Toy Story 3, Brave, Monsters University, Presto, La Luna, The Blue Umbrella, Mater’s Tall Tales, Partly Cloudy, The Ant Bully, and the Jimmy Neutron TV series. As a Technical Director of Lighting at Pixar Animation Studios, Kyoung focused on visual storytelling, mood, and look development through lighting. Her work has claimed numerous awards from the Academy Awards, BAFTA, the Visual Effects Society, the American Film Institute, and many others.
Abstract:
The primary motivation behind our research is to push the physical arts deeper into the CG pipeline for rendering virtual environments. Using photogrammetry and 3D printing technologies, our process enables sculptors and painters to see their physical artworks move beyond the constraints of preproduction. Deviating from the traditional video game production pipeline, we print our low-resolution collision models as physical objects that are then sculpted upon and later scanned for reintegration. In addition to this process, we will also discuss calibration methods that strengthen our ability to iterate quickly, as well as ways of maximizing texture resolution in order to maintain the integrity of the original artwork. By interjecting new technologies into established production models, we have created a unique pipeline for studios and new opportunities for artists.
Tom Bremer
The DAVE School, Visual Cue Studios LLC
Title: Render Passes: Taking control in CG Compositing
Biography:
Tom Bremer started his artistic career more than 10 years ago as a hobby and quickly realized his potential. After moving to Los Angeles in 2007, he has worked with many studios including Rhythm and Hues, Disney, Pixomondo, and Zoic Studios, where his work on CSI: Crime Scene Investigation won a primetime Emmy Award for outstanding visual effects. He has also won multiple Telly Awards for his work throughout the years. His credits include “The Hunger Games”, “Disney’s Planes”, “Terra Nova”, and “Grimm”. Tom is currently the Production Instructor at The Digital Animation & Visual Effects School in Orlando, Florida.
Abstract:
While it used to be behind-the-scenes movie magic, the average person on the street now knows that integrating a VFX element or green-screen footage of an actor requires a certain amount of compositing in a 2D compositing package such as The Foundry’s Nuke or Adobe’s After Effects. What many people don’t know is that compositing fully CG rendered films requires just as much, if not more, compositing of elements. My lecture will introduce the audience to what render passes are, the benefits and drawbacks of compositing CG assets using passes, and some basic techniques, as well as some more advanced techniques that even artists working in the industry might not realize are possible. I will also show a recent animated short I wrote and directed that uses the same techniques I will be speaking about, along with a breakdown of some of the animated shots. This lecture will have something for everyone and will help demystify the art of render passes and compositing.
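A tiny numpy illustration of the core idea, with made-up pass values: a renderer’s additive passes (AOVs) sum back to the beauty image, so any one component can be graded in the composite without re-rendering.

```python
import numpy as np

# Toy 2x2 RGB "render passes"; in production these come from the renderer
# as AOVs (arbitrary output variables). Values here are made up.
h, w = 2, 2
diffuse  = np.full((h, w, 3), 0.30)
specular = np.full((h, w, 3), 0.10)
indirect = np.full((h, w, 3), 0.05)
emission = np.zeros((h, w, 3))

# Additive passes reconstruct the beauty render exactly:
beauty = diffuse + specular + indirect + emission

# The power of pass-based compositing: grade one component without
# re-rendering, e.g. boost the specular by 50% in the composite.
graded = diffuse + 1.5 * specular + indirect + emission
```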
- Game Design and Development
Session Introduction
Jingtian Li
University of the Incarnate Word, School of Media & Design San Antonio, Texas, USA
Title: Creative Texturing and Evolutionary Tools
Biography:
Jingtian Li is an independent 3D character artist and animator, and Assistant Professor of 3D Animation & Game Design (http://www.uiw3d.com) in the School of Media & Design at the University of the Incarnate Word in San Antonio. He has also worked at a variety of animation studios, including Beijing Daysview Digital Image Co. and Passion Pictures NYC. Jingtian holds an MFA in Computer Animation from the School of Visual Arts in New York City and a BFA in Digital Media from the China Central Academy of Fine Arts.
Abstract:
Texturing is one of the most important parts of the 3D animation and game industry. In the past five years, driven by the industry, a new set of tools has emerged to enhance the texturing pipeline. Now, instead of Adobe Photoshop, plenty of evolutionary tools are being used in the industry, such as The Foundry Mari, Allegorithmic Substance Designer, Allegorithmic Substance Painter, and Quixel DDO. With dynamic layers and masks, multi-channel painting, particle brushes, edge and curvature detection, position maps, PBR shading and rendering, real-world capture, and multiple map-baking options, not only has the working pipeline changed; the level of detail, the complexity of channels, and the shading networks are also evolving. The time taken to texture is reduced, and texturing has never been so easy, exciting, and creative. With modern game engines and renderers such as Epic’s UE4 and Solid Angle’s Arnold, creating mind-blowing graphics has never been so efficient and enjoyable.
Meng-Dar Shieh and Wu Ssu Yi
Industrial Design Department, National Cheng Kung University, Taiwan
Title: The Research of the Correlation between Camp Board Game Character Mechanics and Players’ Emotional Responses - Using Lupus in Tabula as an Example
Biography:
Meng-Dar Shieh is an Associate Professor in the Department of Industrial Design, National Cheng Kung University, Tainan, Taiwan. He received his Ph.D. (1990) and Master’s degree (1986) in Mechanical Engineering from the University of Florida, USA. His research interests include neural networks, Kansei engineering, support vector machines, concurrent engineering, computer graphics simulation, virtual reality, computer-aided design and manufacturing, product design, system integration and networking, robotics in medical applications, quality control, digital design, and e-commerce.
Abstract:
People love camp board games for several reasons, including their highly player-targeted features, the uncertainty of identity and trust, and fascinating stories. Designing each character’s abilities and functions is an essential part of many camp board games. By observing the emotions evoked in players by the characters they use, we can measure whether players are highly interested in the game and verify whether players’ emotional reactions accord with the ones designers expected when designing the game. Therefore, establishing a set of inspection steps for game rules is a current issue that needs further development in board games. This study focuses on players’ experience of using hidden characters in camp board games so that we can explore the impact on players’ emotions when they act as different characters. The correlation between players’ emotions and the events they encounter can be analyzed via physiological measurement and video content analysis; these methods can quantify the emotions caused by asymmetric role mechanics. By observing players’ physiological changes, we can determine whether the game mechanic design has produced the expected results. This method applies to camp board games with hidden characters, helps game development teams find directions for future products and improvements to current products, and elevates players’ satisfaction with board games.
Masasuke Yasumoto
Kanagawa Institute of Technology, Japan
Title: Shadow Shooter - All-Around Shooter Game with E-Yumi 3D
Biography:
Masasuke Yasumoto has been an assistant professor at the Kanagawa Institute of Technology since 2015. Born in 1980 in Japan, he received his Ph.D. (film and new media) from Tokyo University of the Arts in 2010 and was an assistant professor at Tokyo University of Technology from 2011 to 2015. He is an interactive artist, researcher, and engineer working at the intersection of art and science. His work covers a range of disciplines including interactive arts, computer graphics, physical interfaces, mobile applications, and video games.
Abstract:
“Shadow Shooter” is a VR shooter game. It uses a bow interface called “e-Yumi 3D” and real physical interactive content that changes a 360-degree all-around view in a room into a virtual game space. This system was constructed by developing my previous interactive content “The Light Shooter”, which is based on “The Electric Bow Interface”. The Light Shooter uses a real traditional Japanese bow device, the Electric Bow Interface, which recognizes all directions in which arrows are shot and projects real-scale targets. However, it displays only one direction with a standing projector; thus, users cannot get a strong sense of reality all around them. Shadow Shooter expands the virtual game space to all the walls in a room, yet does not require large-scale equipment such as multiple projectors. It only requires the e-Yumi 3D device, which is based on an archery bow and includes a mobile laser projector, a 9-axis sensor, strain gauges, a microcomputer and its control board, and a Windows PC. The system consists of shooting content based on the concept of searching. Shadow Shooter provides a 360-degree all-around virtual space, but the projected image is displayed only at the front. Therefore, I designed the content so that a player searches for attacking enemies in all directions and shoots them with a bow on the projected images, just like shining a flashlight. I also added image content enabling the use of biological motions as well as searching for enemies based on information from their shadows.
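As a hedged sketch of how such a bow interface might turn its 9-axis sensor reading into a shot direction (my assumption for illustration, not the author’s implementation), one can rotate the bow’s resting aim axis by the orientation quaternion fused from the IMU:

```python
import numpy as np

def quat_rotate(q, v):
    # Rotate vector v by unit quaternion q = (w, x, y, z).
    w, x, y, z = q
    u = np.array([x, y, z])
    return v + 2.0 * np.cross(u, np.cross(u, v) + w * v)

# Hypothetical reading from a 9-axis IMU fused into an orientation
# quaternion: here roughly a 45-degree rotation about the y axis.
q_sensor = np.array([0.924, 0.0, 0.383, 0.0])
forward  = np.array([0.0, 0.0, 1.0])     # bow's resting aim axis

aim = quat_rotate(q_sensor, forward)     # world-space shot direction
# Pick which wall of the room the virtual arrow should hit:
wall = "front" if abs(aim[2]) >= max(abs(aim[0]), abs(aim[1])) else "side"
print(aim, wall)
```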
Michael W Wikan
Booz Allen Hamilton, San Antonio
Title: Creative Software Development: Beyond the Game Industry Horizon
Biography:
Mike Wikan is a longtime game industry creative leader, having developed highly successful entertainment software for Nintendo, Atari, id Software, and GT Interactive (the Metroid Prime series and Donkey Kong Country Returns), among many others. Additionally, he has worked in STEM-related software development with partners such as Sesame Workshop and the National Science Foundation. He is currently Creative Director for Booz Allen Hamilton’s IMAG group in San Antonio and creates interactive and immersive training for the US Armed Forces.
Abstract:
A discussion of the opportunities now emerging in the global marketplace for traditional game development, art, and game design skills outside the traditional videogame entertainment industry. Mike Wikan will address STEM, training, defense, and educational software in general, including new opportunities to leverage VR/AR hardware to create interactive software that will transform how the world learns.
Matthew Johnson
University of South Alabama, USA
Title: Graphic Design in the Classroom: Adjusting to Educate the New Digital Generation
Biography:
Matthew Johnson has been a Professor of Graphic Design at the University of South Alabama for eleven years and a designer for nearly 20. He holds an MFA in Graphic Design from Louisiana Tech University and is an internationally awarded designer and a published children’s book author and illustrator. Matthew’s experiences in the field, as well as in the classroom, allow him to present confidently on topics of graphic design and computer arts. He lives for visual gratification and the ability to create. His world revolves around visual communication and the satisfaction that comes from a piece of work well done.
Abstract:
During this conference presentation I will address the concerns that arise when trying to bridge the gap between being an observer of visual communications and becoming an active producer of them. I will approach the subject primarily as it relates to the design educator and their students. In order to produce proficient, high-calibre graduates who can excel in a new digital world, educators must evolve with an ever-changing population of students. With the ease of communication and shorter attention spans affecting retention abilities, changes in the classroom setting are inevitable. The way in which information is relayed must reflect a culture where learning styles have become more interactive. We are faced with an educational pendulum that is swinging away from the lecture-and-examination style of teaching. Changes can come from anywhere along the broad spectrum of course elements, ranging from small techniques to complete overhauls in classroom dynamics and management. Pulling from eleven years of personal experience in higher education, I will introduce the following:
• Rethinking course structures through policies and objectives
• Team based learning through collaborative design work
• Transitioning from a print based classroom to a more digital environment
• Making visual changes to keep your materials fresh
Emphasis will be put on personal alterations that will help rebuild a classroom into one that develops an unshakeable passion for graphic design and computer arts. An educator should feel confident that they are fueling a dedication to art and computer-based creativity. Although this presentation is not centered specifically on interactive design or motion graphics, it does touch on the core of graphic design: the individuals who will influence the future of the field. Understanding the new audience is crucial to growing stronger graphic designers and, in turn, a field that remains relevant.
Tom Berno
Texas State University, USA
Title: The Game Layer - The UX and UI of a Disruptive Fantasy Sports Startup
Biography:
Tom Berno is an experienced brand communications specialist with more than 20 years of practice in the field. His career encompasses corporate communication design, brand development, executive administration, and thought leadership in design thinking. His client experience spans a breadth of industries including Financial Services, Real Estate Development, Aerospace, Technology, Cultural & Entertainment, and Hospitality services. As a past employee at several nationally prominent design and advertising firms based in Texas, Tom continues to collaborate with them through his design consultancy, idea21. He is currently a Professor of Communication Design at Texas State University specializing in corporate brand communications curriculum for both BFA and MFA programs. He recently completed a 3-year term as Associate Director of the School of Art and Design. Tom’s design work has received extensive professional recognition throughout his career, including in the noted publication Brand Identity Essentials (Rockport Publishers).
Abstract:
Online fantasy sports competition constitutes a nearly $4 billion market in the U.S., with NFL fantasy play exceeding $1.6 billion in annual revenues. Yet international football/soccer has performed poorly by comparison. The author, part of the founding team for a startup company, English Fantasy Football, describes how an innovative new model of play capitalized on new approaches to design. The platform is currently undergoing a successful alpha test, with the beta expected to be active at the start of the 2015/16 Barclays Premier League football season in England. The presentation will not only reveal the rich interface but, more importantly, focus on how unmet user needs drove an integrated platform-based solution. The result creates an entirely new model of fantasy competition for the world’s most popular sport.
Biography:
John Carter is a 3D artist currently employed by Booz Allen Hamilton, a government contractor for the Department of Defense in San Antonio, Texas. He attended Northwest Vista College, where he developed skills in 3D animation, video special effects, game concept art and game design, as well as traditional art skills and graphic design, utilizing current industry software. John was awarded “Best in Show” at the 2011 New Vistas in Media Festival, a collaboration of student technical achievements in video and media arts. Since 2012, John has served as a member of the multimedia advisory board of Northwest Vista College’s game development and animation department. John previously worked as an assistant to the Director of Education at Geekdom for a Rackspace-hosted mobile unit offering educational enhancement projects for a STEM program. He has also worked as an adjunct faculty instructor at Northwest Vista College, teaching students game design and pipeline processes. Through a grant from Microsoft, John instructed local school teachers on basic programming in Kodu for later implementation in the classroom. He also volunteers his time as an instructor and mentor for the Screaming Chickens Robotics team, a non-profit organization offering after-school education in software technology.
Abstract:
In this talk we will address the relationship between game design and education. Training for knowledge and readiness has been at the forefront of any campaign, and time and technology have accelerated the process by which we prepare ourselves. The United States military is no exception and has hastened to catch up with technical advancements. Our armed forces are embracing the science of game design and development over traditional methods of training. Methods that have been in place for decades are being revamped with supplemental module-based training. We have come to a place where the art of game design is replacing chalkboards and complementing textbooks to give an educational advantage. The fundamental elements of game design do not always apply directly in this regard but rather are customized to cater to the needs of the student. Game mechanics afford the ability to engage and immerse individuals in virtual spaces that provide room for controlled experimentation and simulation. By adopting this method of training, the student is exposed to preemptive exploratory learning in which they can apply their technical skillset.
Christopher Thibeault, Jean-Yves Hervé
University of Rhode Island, USA
Title: Event and Scene Detection for Enhancing Emulated Console Games
Biography:
Christopher Thibeault is a Ph.D. student at the University of Rhode Island and an Assistant Professor at the Community College of Rhode Island. Christopher’s research interests include image processing, computational photography, noise reduction, and super-resolution algorithms. Jean-Yves Hervé is an Associate Professor of Computer Science at the University of Rhode Island and is a founding member of the URI 3D Group for Interactive Visualization. Jean-Yves’ research interests include computer vision, modelling, and simulation, and their application to scientific visualization, robotics, and bioinformatics.
Abstract:
The audio-visual quality of video games has increased steadily over the last twenty years. The hardware of older game consoles was not capable of delivering the high-resolution graphics and sounds expected by today’s gamers. Despite this, the use of emulators to play old games continues to be very popular. To make the experience more palatable, emulators have used various forms of interpolation-based upscaling algorithms to provide a higher-resolution gaming experience; there is, however, a limit to how much this can improve the visuals of a game. Recently, an approach based on using the virtual hardware state information inside the emulator to detect and replace game sprites proved effective in providing a substantially enhanced set of game visuals at low computational cost. This talk investigates the modification of an existing game console emulator to perform event and scene detection to aid in the enhancement process. This will enable the system to achieve an enhancement of the audio-visual output of older games that goes beyond simply replacing sprites. It will allow for the replacement of all graphics and music assets in an emulated game in real time. Further, it will enable the addition of new visual effects and sounds where none previously existed. All of this is done without altering the original game logic, preserving the feel of the original game. To achieve this, the state information about the virtual game console hardware is accessed and used to perform the event and scene detection at low computational cost. The system developed as part of this research uses a modified version of the FCEUX emulator for the Nintendo Entertainment System. Using this system, a substantial audio-visual upgrade is made to the appearance of a commercially released 8-bit console game, including new graphics, music, speech sounds, and lighting and fog effects.
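A hedged sketch of the state-based detection idea follows, with hypothetical names standing in for the emulator’s internals (this is not FCEUX’s actual API): NES sprites live in OAM and reference 16-byte pattern-table tiles, so hashing a tile identifies which artwork to replace.

```python
import hashlib

# Hypothetical sketch of state-based sprite replacement in an emulator.
# `ppu_oam`, `pattern_table`, and `frame_renderer` stand in for the NES
# PPU's sprite memory and the host's drawing layer; names and layout are
# illustrative assumptions.
HD_SPRITES = {}   # tile-hash -> high-resolution replacement image

def tile_hash(pattern_table, tile_index):
    # Each NES tile is 16 bytes (8x8 pixels, 2 bitplanes); hashing it
    # identifies the original artwork regardless of where it is drawn.
    tile = bytes(pattern_table[tile_index * 16:(tile_index + 1) * 16])
    return hashlib.md5(tile).hexdigest()

def detect_and_replace(ppu_oam, pattern_table, frame_renderer):
    # OAM holds 64 sprites, 4 bytes each: y, tile index, attributes, x.
    for i in range(64):
        y, tile, attr, x = ppu_oam[i * 4:i * 4 + 4]
        replacement = HD_SPRITES.get(tile_hash(pattern_table, tile))
        if replacement is not None:
            # Draw the HD asset instead of the original low-res sprite.
            frame_renderer.draw(replacement, x, y, attr)
```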
- Computer Graphics Applications
Session Introduction
Matt Dombrowski
University of Central Florida, School of Visual Arts and Design, USA
Title: The Gamification of Modern Society: Digital Media’s Influence on Current Social Practices
Biography:
Matt Dombrowski is an Assistant Professor focusing on Digital Media at the School of Visual Arts and Design (SVAD) at the University of Central Florida in Orlando, FL. His current line of research focuses on melding digital media technologies and techniques to deliver digital art as an interactive health tool, through the development of interactive physical and mental health counseling games with local mental health clinicians. He believes that the influences of digital media can be used for the betterment of our society and can help positively shape its future.
Abstract:
In today’s society, how does gaming culture, experienced through modern digital media devices, influence the casual user? Over recent years, society has witnessed the ever-growing influence and acceptance of technology and digital game concepts being incorporated into our day-to-day lives. These “gamification concepts” include various psychological approaches to using technology to aid in evoking, motivating, and influencing behavior, and even changing the personality of the user. Using today’s technology, users have begun to incorporate game-like, point-based methods that affect everything from shopping habits and education patterns to their personal physical and mental health. With the ever-growing availability of technologies from the Fitbit and Apple Watch to the language learning applications Duolingo and Rosetta Stone, we as a society seem to thrive more and more on technology and gamification to influence our everyday lives. What drives us as a society to explore and accept these seemingly empty point-based applications that influence our actions so strongly? This presentation explores the evolution of gamification in today’s society, for both good and bad. We will delve into current, on-the-market applications utilizing this concept and the possible future of this approach.
Kyoung Lee Swearingen
University of Texas at Dallas, USA
Title: Story Telling Through Lighting in Animated Films
Biography:
Kyoung Lee Swearingen has worked in the film industry for the last decade on a variety of features and shorts including Ratatouille, WALL-E, Up, Cars 2, Toy Story 3, Brave, Monsters University, Presto, La Luna, The Blue Umbrella, Mater’s Tall Tales, Partly Cloudy, The Ant Bully, and the Jimmy Neutron TV series. As a Technical Director of Lighting at Pixar Animation Studios, Kyoung focused on visual storytelling, mood, and look development through lighting. Her work has claimed numerous awards from the Academy Awards, BAFTA, the Visual Effects Society, the American Film Institute, and many others. Kyoung received her M.F.A. from The Ohio State University, her B.F.A. from Savannah College of Art and Design, and a B.S. in Chemistry from Sungshin Women's University in Seoul, Korea. She has taught at various institutes and universities throughout Korea and the United States, and is currently an Assistant Professor of Arts and Technology at UT Dallas.
Abstract:
The primary objective of lighting in animated films is to direct the audience’s eye and not lose their attention until the film credits roll. To achieve that, maintaining continuity and establishing focal points by setting a visual hierarchy is critical to success. Another objective of lighting is to support story and emotion by creating an environment that the audience can engage in and believe the characters are a part of. To accomplish this, lighters need to identify the genre of the film, portray time and place, exhibit time of day and weather, and determine the quality of light and key light placement. Lastly, lighting adds beauty to an otherwise flat image void of shadow and atmosphere, and delivers the audience to another world. This lecture will discuss the key objectives and processes of lighting within the framework of storytelling in animated films.
Noel Lopes
Polytechnic of Guarda, Portugal
Title: GPUMLib Framework: Using the GPU to Empower Machine Learning Research
Biography:
Noel Lopes is a Professor at the Polytechnic of Guarda, Portugal, and a Researcher at CISUC – University of Coimbra, Portugal. Currently, he is focused on extracting information from large repositories and streams of data using supervised, unsupervised, and semi-supervised machine learning algorithms. Accordingly, one line of research being pursued consists of developing parallel Graphics Processing Unit (GPU) implementations of machine learning algorithms with the objective of substantially decreasing the time required to execute them, providing the means to study larger datasets.
Abstract:
The amount of information being produced by humans is continuously increasing, to the point that we are generating, capturing, and sharing an unprecedented volume of data from which useful and valuable information can be extracted. However, obtaining the information represents only a fraction of the time and effort needed to analyze it. Hence, we need scalable, fast Machine Learning (ML) tools that can cope with large amounts of data in a realistic time frame. As problems become increasingly challenging and demanding, they become, in many cases, intractable for traditional CPU architectures. Accordingly, novel real-world ML applications will most likely demand tools that take advantage of new high-throughput parallel architectures. In this context, today’s GPUs (Graphics Processing Units) can be used as inexpensive, highly parallel programmable devices, providing remarkable performance gains compared to the CPU (it is not uncommon to obtain speedups of one or two orders of magnitude). However, mapping algorithms to the GPU is not an easy task. To mitigate this effort, we are building an open-source GPU Machine Learning Library – GPUMLib – to help ML researchers and practitioners worldwide. This presentation focuses on the challenges of implementing GPU ML algorithms using CUDA. Moreover, it presents an overview of GPUMLib algorithms and tools and highlights their main benefits.
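GPUMLib itself is implemented in C++/CUDA; purely as an illustration of the CPU-versus-GPU pattern the talk quantifies, the sketch below times the same matrix product with NumPy and with CuPy (assuming a CUDA-capable GPU and CuPy installed). Speedups of one to two orders of magnitude, as mentioned above, are typical for dense, data-parallel workloads like this.

```python
import time
import numpy as np
import cupy as cp   # assumes a CUDA-capable GPU and CuPy installed

n = 4096
a_cpu = np.random.rand(n, n).astype(np.float32)
b_cpu = np.random.rand(n, n).astype(np.float32)

t0 = time.perf_counter()
np.matmul(a_cpu, b_cpu)
cpu_s = time.perf_counter() - t0

a_gpu, b_gpu = cp.asarray(a_cpu), cp.asarray(b_cpu)
cp.matmul(a_gpu, b_gpu)                 # warm-up run (kernel compilation)
cp.cuda.Device(0).synchronize()

t0 = time.perf_counter()
cp.matmul(a_gpu, b_gpu)
cp.cuda.Device(0).synchronize()         # wait for the async GPU kernel
gpu_s = time.perf_counter() - t0

print(f"CPU {cpu_s:.3f}s  GPU {gpu_s:.3f}s  speedup x{cpu_s / gpu_s:.1f}")
```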
Carol Luckhardt Redfield
St. Mary’s University, USA
Title: Teaching Computer Graphics by Application
Biography:
Carol Luckhardt Redfield, Ph.D., is Graduate Program Director for Computer Science and Computer Information Systems and a Professor of Computer Science at St. Mary's University. She started at St. Mary’s in 1998 after being in the computer industry for over 15 years. She specializes in computer gaming, computer-based training systems, and expert systems. She has a Ph.D. from the University of Michigan in Computer Science and Engineering; her thesis work was in multiplayer gaming and artificial intelligence. She has a Master’s degree in mathematics and another in control engineering, and her Bachelor’s degree is in education with double majors in mathematics and psychology. Dr. Redfield has worked for Bell Labs, IBM, LLNL, Southwest Research Institute, UTSA, and Mei Technology, and has done consulting work for WebStudy and Landmark Education. She has published 4 books and well over 50 reviewed and invited papers. She has been an invited speaker at Star Trek conventions, chaired conferences, founded a charter school, led seminars for Landmark Education, and was inducted into the San Antonio Women’s Hall of Fame. Dr. Redfield serves on committees for the National Space Society, San Antonio Space Society, John Jay Science and Engineering Academy, and the Friends Meeting of San Antonio (Quakers). She plays and coaches Ultimate Frisbee. She is married to Joe Redfield, at SwRI, and has two children, Neil and Crystal.
Abstract:
The computer graphics class at St. Mary's University focuses on the application of computer graphics while learning graphics terms, some theory of how graphics tools work, and common graphics creation tools similar to Microsoft Paint, Adobe Photoshop, Adobe Flash, and Adobe Dreamweaver. Students are required to create a brand for themselves, a group, a company, or an organization that they select. Within that brand, students create logos, a brochure, a business card, business stationery, an animation file, and a website; the website incorporates all the work done during the rest of the class. Students learn the theory behind the graphics tools and learn how to use various tools to create the visual images to communicate a brand they are interested in. This presentation will cover how the course is taught in a hybrid format utilizing the Blackboard learning management system. Sample projects that students created will be shown as well.
Biography:
Yvonne Cao is a graphic designer, typographer, and educator. She holds an MFA in Graphic Design from Louisiana State University and received her BA in Mass Communication with a concentration in TV production from Hunan Normal University in China in 2009. In 2007, she joined an honors exchange program at Middle Tennessee State University, where she studied Electronic Media Communication. Between degrees, Cao worked as a creative director at Hunan Vision International Advertising Co. in China. During her studies at LSU, Cao worked as a graphic design instructor and an active graphic designer in GDSO (Design Office). In 2012, Yvonne Cao served as an Assistant Professor of Graphic Design at the University of Mary Hardin-Baylor, where she taught Graphic Design I & II, Typography, and Interactive Design. Her scholarly interests include cross-cultural design, branding, typography, and the history of Asian art. In her most recent research, “Visual Translation”, she focuses on how to facilitate a smooth visual transition in western branding by using typography. To move beyond traditional type design, her work introduces an innovative methodology for designing typefaces using existing Latin typefaces; it is created as an educational tool which seeks to help graphic design students and type enthusiasts, with emphasis on designers who are working on cross-cultural branding. Her professional graphic design work has received recognition from AIGA (the American professional organization for design) and the American Advertising Federation.
Abstract:
The visual consistency of branding makes a significant difference when a brand is successfully introduced to another culture. My study focuses on how to facilitate a smooth visual transition in western branding from Latin letters to Chinese characters. To move beyond traditional Chinese type design, Visual Translation introduces a new method for designing Chinese typefaces using existing Latin typefaces. This web-based educational tool seeks to help Chinese graphic design students and type enthusiasts, with emphasis on designers who are working in a cross-cultural environment, maintain visual consistency for branding.
David Mesple
Rocky Mountain College of Art and Design
Title: Haptic Real-time Interactive Animation, Sound, and…
Biography:
David Mesple’ is an American artist who exhibits around the world. His work has been profiled in texts, magazines, music CDs, and public television presentations. He is one of the few contemporary artists to be honored with a two-person exhibit with one of the masters of Western art, Rembrandt van Rijn. He is a non-dominant left-brained and right-brained artist, capable of linear, multi-linear, and non-linear thinking, and does not compartmentalize information, nor assert that knowledge resides exclusively within certain disciplines or domains. David believes that all information lies on a spectrum of immense complexity and diversity, and is available to all problem-solvers. Mesple’ is a Professor of Art and Ideation at Rocky Mountain College of Art and Design, and is working on his interdisciplinary PhD in “Virtuosity Studies”, combining Fine Arts, Neuroscience, Physics, and Philosophy.
Abstract:
I intend to show a chronology from the early attempts to develop successful real-time abstract audiovisual technologies by Oskar Fischinger (1900 – 1967) to present-day haptic interfaces which do everything he envisioned, and more. Fischinger’s pioneering efforts in image and sound generation led to his development of the Lumigraph, an instrument which could offer audiences a one-of-a-kind audiovisual experience. The immediacy of the aural and visual effects the Lumigraph offered would lead Fischinger away from formal animation and towards his ingeniously simple instrument, making a little-known but monumental statement about the degree of subtlety and expressivity attainable with an analog audiovisual performance device. Fast forward to 1999 and the pioneering work of Golan Levin, explained in his MIT graduate thesis and in his TED Talk “Software as Art”, and we see the complete manifestation of Fischinger’s vision of “Absolute Film” through digital processes. Levin’s advancements completed and expanded Fischinger’s vision of generating sound and imagery within abstracted processes in real time, creating audiovisual representations via direct haptic interfaces and expanding opportunities for manifesting sound, image, and physical form via graphic interfaces and output devices including 2D and 3D visualization, CNC production, 3D printing, and other emerging technologies.
Raphael DiLuzio
University of Southern Maine, USA
Title: Broken Cinema: Creating Digital Art with an eye and hand in time
Biography:
Raphael DiLuzio is an artist, professor, serial creative, and Director of the Ci2 SRS (Creative Intelligence Innovation Collaboration Special Research Studio) at the University of Southern Maine. As a practicing artist he considers himself a visualizer who works with traditional painting, drawing, digital, time-based, and interactive media. He makes artworks that are collected and exhibited internationally. For over 25 years he has maintained a deep interest in, and researched, how the creative process works. In 2013 he was awarded a National Science Foundation (NSF) grant to develop his own proprietary methodology for teaching creative process thinking to Science, Technology, Engineering and Mathematics (STEM) professors, students, corporate officers, and other non-art persons. This remains an ongoing focus area of his research.
Abstract:
During the Renaissance, science and art were unified. Historically, a divide then occurred that separated art from the technologies and innovations of science. For over five hundred years, the creative technology available to artists changed little, until two historical innovations combined. Prior to the invention of film, the tools available to visual artists were limited to creating fixed images. The invention of film changed that, although the initial high cost and cumbersome nature of the medium required an army of workers and bankers to produce major works. Initially, the invention of the computer seemed to have little to do with art. Through advancements in processing power and the ability to work with sound and image, computers evolved to play a fundamental and influential role in art making. The combination and evolution of computer and film technology has led to the creation of time-based art. The advancement of the computer as a facile and accessible tool has enabled artists to create works that embody time as a formal element for expression and narrative structure. This has altered how the viewer perceives and understands meaning and narrative in visual art. It has caused a shift in the viewer's perceptive eye from a "silent eye" to an "eye-in-time." This eye is readily accustomed to recognizing and accepting time-based imagery and narratives that, in part or whole, contain combinations and instances of montage, superimposition, variability of frame rate, duration, and non-linear sequence. This talk will describe the process, elements, and formal principles of working with a computer to create time-based visual art. It will examine how the creative process differs when working with an art form distinguished by temporal qualities. The discussion also covers a definition of terms specific to the medium, its structural aspects, its relation and similarities to other mediums, and how its narrative structure differs from conventional cinema.
Biography:
Zoila Maria Donneys is an Associate Professor of Visual Arts at Lone Star College, Texas. She earned a Bachelor of Arts degree in Graphic Design from Belas Artes University in Colombia, South America, with an emphasis on Advertising Design, and also holds a Master of Arts degree in Literature from Saint Louis University. She worked as a professor at Saint Louis University in Missouri for more than 12 years. She was employed for more than fifteen years as a general manager for B&C Advertising Company and as art department director for the El Pais newspaper company.
Abstract:
This presentation outlines different methods to design 3D images based on 2D elements using projective geometry with some of the latest computer programs. The use of 3D images can enhance the human visualization of any particular project; this presentation, however, will emphasize the use of 3D scenes in consumer industrial areas such as food, agriculture, health care, cosmetics, and pharmaceuticals. I will approach this study of human object perception, with its 2D images and 3D scenes, by presenting a variety of student and personal industrial advertising projects. The audience will come to appreciate the different techniques for, and possible purposes of, manipulating 2D images in 3D scenes for specific commercial ends. This presentation's ultimate goal is to present the commercial and human perception reasons why there is increasing demand for 3D projects in education and marketing, achieved by manipulating 2D images into 3D scenes and comparing student projects with historical commercial projects.
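The geometric core of moving between 2D and 3D is the projective pinhole model; here is a minimal sketch with an assumed intrinsic matrix (illustrative values, not tied to any of the projects above):

```python
import numpy as np

# Minimal perspective projection: map 3D points to the 2D image plane.
# K is an assumed pinhole camera intrinsic matrix; values are illustrative.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project(points_3d):
    """Project Nx3 camera-space points to Nx2 pixel coordinates."""
    p = points_3d @ K.T              # apply intrinsics
    return p[:, :2] / p[:, 2:3]     # perspective divide by depth

# A package mock-up corner 2 m in front of the camera projects to:
print(project(np.array([[0.1, -0.05, 2.0]])))   # -> [[360., 220.]]
```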
Anna Yankovskaya
Tomsk State University of Architecture and Building, Russia
Title: Cognitive tools based on n-simplexes for decision-making and its justification in intelligent systems
Biography:
Professor Anna Yankovskaya obtained her DSc in Computer Science from Tomsk State University in Russia. She is currently head of the Intelligent Systems Laboratory and a professor in the Applied Mathematics Department at Tomsk State University of Architecture and Building, a professor in the Computer Science Department at Tomsk State University, a professor at Tomsk State University of Control Systems and Radioelectronics, and a professor at Siberian State Medical University. She is the author of more than 600 publications and 6 monographs. Her scientific interests include mathematical foundations of test pattern recognition and the theory of digital devices; artificial intelligence, intelligent systems, learning and testing systems, blended education and learning; logical tests, mixed diagnostic tests, and cognitive graphics; and advanced technology in education.
Abstract:
This talk presents cognitive tools based on n-simplexes for decision-making and its justification in intelligent systems. Intelligent systems based on a matrix representation of data and knowledge, on test methods of pattern recognition, and on fuzzy and threshold logics are suggested, with decision-making and its justification supported by graphical tools, including cognitive ones. The idea of applying the n-simplex, together with a theorem for decision-making and its justification in intelligent systems, was proposed by the author in 1990. A mathematical visualization that maps the object under investigation into the n-simplex is given. The application of cognitive graphics tools based on the decomposition of the 3-simplex into sets of 2-simplexes for decision-making and its justification in intelligent systems is suggested; three ways of decomposing the 3-simplex into sets of 2-simplexes are given, each consisting of four 2-simplexes. The suggested visualizations are invariant to the problem area, increase the quality of decision-making, and achieve a better degree of justification. This year, the 2-simplex prism was proposed as a cognitive tool for decision-making and its justification in intelligent dynamic systems for different problem areas: medicine, education, biology, physics, psychology, ecology, bioecology, etc. The 2-simplex prism contains a 2-simplex at its base together with geometrical cross-sections. It is reasonable to implement an intelligent subsystem of cognitive tools for visualization, decision-making, and justification based on the construction of the 3-simplex and the 2-simplex prism in the intelligent instrumental software (IIS) IMSLOG. IIS IMSLOG, developed in the Laboratory of Intelligent Systems of Tomsk State University of Architecture and Building, is proposed for the construction of applied intelligent systems for different problem and interdisciplinary areas.
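For readers unfamiliar with simplex-based visualization, the usual convention (my reconstruction for exposition, not necessarily the author’s exact formulation) places an object inside the n-simplex via barycentric coordinates:

```latex
% A point representing an object with normalized membership degrees
% \lambda_1, \dots, \lambda_{n+1} in n+1 classes is placed in the
% n-simplex with vertices v_1, \dots, v_{n+1} via barycentric coordinates:
\[
  x \;=\; \sum_{i=1}^{n+1} \lambda_i \, v_i,
  \qquad \lambda_i \ge 0,
  \qquad \sum_{i=1}^{n+1} \lambda_i = 1 .
\]
% Proximity of x to vertex v_i visualizes (and helps justify) assigning
% the object to class i; for n = 2 this is the familiar triangle plot.
```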
Masoud Akbarzadeh
Institute of Technology in Architecture
ETH Zurich
Switzerland
Title: On 3D Reciprocal Diagrams and the Equilibrium of Spatial Compression/Tension-only Structural Forms
Biography:
Masoud Akbarzadeh is currently a PhD candidate at the Institute of Technology in Architecture, ETH Zurich, where he is developing 3D graphical methods of structural design using 3D reciprocal diagrams at the Block Research Group, ITA, ETH Zurich. He holds a Master of Science in Architectural Studies, Design and Computation (SMArchS) and a Master of Architectural Design (MArch) from the Massachusetts Institute of Technology. Prior to MIT, he received a Master of Science in Earthquake Engineering and Dynamics of Structures from the Iran University of Science and Technology. Masoud has received multiple international awards, including the renowned SOM award for design and research in the field of architecture for his MArch thesis in 2011.
Abstract:
Graphical methods of structural design use pure geometric methods for the design and analysis of structural forms. Used and developed by many researchers since the 19th century, these methods, known as graphic statics, are based on the reciprocal relationship between the form and force diagrams formulated by Maxwell. This reciprocity provides unprecedented control in the design of funicular structural forms. However, the conventional methods of graphic statics are based on 2D reciprocal diagrams and are therefore quite limited in dealing with 3D structural forms. The idea of reciprocity in three dimensions was originally proposed by Rankine in 1864; nevertheless, the lack of computational and representational tools at that time prevented its further development and application. This presentation is based on novel research that proves and illustrates the three-dimensional reciprocity between the form and force diagrams, 150 years after its original proposition. It shows that the design and analysis of complex, spatial funicular structural forms does not require sophisticated algebraic methods and can be achieved by pure geometric constructions. According to this research, the equilibrium of a 3D system of forces that is in pure compression/tension can be represented by a (group of) closed, convex polyhedral cell(s) with planar faces. This research clarifies the topological and geometrical relationships between the components of a system of forces (a polyhedral frame) and its reciprocal force diagram (polyhedron). Additionally, it provides a computational approach to construct a form diagram from a given group of convex force polyhedrons and vice versa. To further emphasize the potential of this method in design, research, and practice, the presentation provides examples where manipulation of the force diagram results in the generation of novel spatial structural forms. In conclusion, it shows how the reciprocity between the form and force diagrams in three dimensions can be used to extend existing 2D methods of graphic statics to 3D and, therefore, open a new horizon in the fields of structural design, architecture, and computer science.
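The geometric fact underlying such 3D force diagrams can be stated compactly (a standard consequence of the divergence theorem, given here for orientation rather than as the author’s full result):

```latex
% For a closed polyhedral cell with planar faces F_1, ..., F_m, where face
% F_i has area A_i and unit outward normal \hat{n}_i, the divergence
% theorem gives
\[
  \sum_{i=1}^{m} A_i \, \hat{n}_i \;=\; \mathbf{0},
\]
% so the faces can represent a system of concurrent forces (magnitude A_i,
% direction \hat{n}_i) in static equilibrium. This is why a closed convex
% polyhedron can serve as the reciprocal force diagram of a spatial node.
```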
Tanzila Saba
Prince Sultan University in Riyadh, Saudi Arabia
Title: Innovative Technologies and Applications of Computer Graphics
Biography:
Dr. Tanzila Saba earned her PhD in Documents Information Security and Management from the Faculty of Computing, Universiti Teknologi Malaysia (UTM), Malaysia, in 2012, and won the best student award in the Faculty of Computing at UTM that year. Currently, she is serving as an Assistant Professor in the College of Computer and Information Sciences, Prince Sultan University, Riyadh, KSA. Her research interests include intelligent data mining and forensic document analysis and security. She is the author of around 100 research articles, more than thirty of which are ISI/SCIE indexed. She is a member of IEEE, TWAS, and IFUW. Due to her excellent research achievements, she is included in Marquis Who's Who (S & T) 2012.
Abstract:
This presentation will highlight current innovative technologies and applications in computer graphics. Computer graphics has found successful applications in virtually all aspects of life, proving the common proverb that "a picture is worth a thousand words". Its applications can be found on television, in newspapers, and in all sorts of advertisements, with particular uses in weather forecasting, animation, animated movies, and medical diagnosis and treatment. A well-constructed graphic presents complex data in a manner that is simple to understand and interpret, even for a layperson. In the electronic and print media, graphics are employed to compare achievements across all sectors of enterprise and to attract new business. Several innovative tools are currently available on the market to visualize data. Such visualization can be categorized into several types, notably two-dimensional (2D) and three-dimensional (3D); the latter, however, is highly processor-demanding, so 2D computer graphics remain acceptable and widely applicable. Computer graphics emerged as a sub-area of computer science that studies methods for digitally synthesizing and manipulating visual content. Over the past decade, specialized fields have developed, such as information visualization and scientific visualization, the latter more concerned with "the visualization of three dimensional phenomena (architectural, meteorological, medical, biological, etc.), where the emphasis is on realistic renderings of volumes, surfaces, illumination sources, and so forth, perhaps with a dynamic (time) component". The most common techniques in computer graphics are 3D projection, ray tracing, shading, texture mapping, anti-aliasing, volume rendering, and 3D modelling.
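As a small illustration of two of the techniques named above, ray tracing and (Lambertian) shading, the sketch below casts a single ray at a sphere and shades the hit point. All scene values are assumptions chosen for the example, not material from the talk.

```python
# Minimal sketch: one ray-sphere intersection plus Lambertian shading.
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def ray_sphere(origin, direction, center, radius):
    """Return the nearest positive hit distance along a unit-length ray, or None."""
    oc = origin - center
    b = 2.0 * np.dot(direction, oc)
    c = np.dot(oc, oc) - radius ** 2
    disc = b * b - 4.0 * c          # direction is unit length, so a = 1
    if disc < 0:
        return None
    t = (-b - np.sqrt(disc)) / 2.0
    return t if t > 0 else None

origin = np.array([0.0, 0.0, 0.0])
direction = normalize(np.array([0.0, 0.0, -1.0]))
center, radius = np.array([0.0, 0.0, -3.0]), 1.0
light_dir = normalize(np.array([1.0, 1.0, 1.0]))

t = ray_sphere(origin, direction, center, radius)
if t is not None:
    hit = origin + t * direction
    normal = normalize(hit - center)
    brightness = max(0.0, np.dot(normal, light_dir))  # Lambert's cosine law
    print(f"hit at {hit}, diffuse brightness {brightness:.3f}")
```

A full ray tracer repeats this per pixel and adds texture mapping, anti-aliasing, and recursive reflection rays on top of the same primitive.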
Biography:
Waqas Hussain completed his Bachelor's degree at Air University, Islamabad. His research paper attained 3rd position at the International Conference on Energy Systems and Policies (ICESP-2014). He developed a 3D model for the World Wide Fund, a UK-based organization, and has presented his research papers at international conferences in Turkey, Dubai, and Pakistan. Although there are at least a million hospitals worldwide, none of them has an application catering to sperm morphology; he is now working on this idea using computer graphics and computer vision algorithms.
Abstract:
Of the at least one million hospitals worldwide, none has an application catering to sperm morphology. Microscopic evaluation of human sperm quality is a basic requirement of any diagnostic fertility service, assisted conception (IVF) centre, or pathology laboratory. Human sperm is evaluated in terms of three key features, namely concentration (sperm count), motility (sperm speed), and morphology (individual sperm shape). Conventional manual microscopic analysis of sperm samples is time-consuming (1-2 hours) and lacks accuracy and reproducibility in many IVF centres. Motility and concentration are handled with variable degrees of efficiency, but morphological analysis, that is, the detection of individual sperm health and abnormalities, is still missing from automated software tools. Manual testing for morphology in labs according to World Health Organization (WHO) standards is labelled flawed by andrology and fertility researchers due to the following problems:
• A dye has to be injected into the immobilized sample so that the microscope can resolve the sperm heads at 1000x magnification using oil immersion. The dye may transform the natural morphological characteristics of the original cells.
• Too much time is consumed: 1000x magnification means looking at only a couple of sperms per slide, and WHO standards require at least 200 sperms to be analysed, so many images from the microscope have to be taken and processed.
We propose a solution that analyses the immobilized sperms at 200x magnification without injecting a dye. This work has been highly appreciated and recommended by many hospitals in the UK, and we have recommendation letters from two major hospitals in the United Kingdom. The physical analysis of a sperm involves the study of its head, mid-piece, and tail characteristics. The objectives of the present study are:
• To analyse sperm samples at lower magnifications, capturing more sperms per image and saving time
• To write image processing and computer graphics algorithms that detect sperms without injecting a chemical dye that interferes with their natural morphology (a segmentation sketch follows this abstract)
• To automatically divide each sperm into its three major parts, namely head, mid-piece, and tail
• To find head abnormalities using machine learning techniques
• To find neck and tail defects using tracking algorithms and structural analysis techniques described in a bioinformatics paper
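The sketch referenced above is given here. It is one plausible first stage of such dye-free screening, my illustration rather than the authors' actual tool: segment candidate sperm heads in a 200x grayscale micrograph with OpenCV and measure each head's fitted-ellipse axes, which could then be compared against WHO reference ranges. The file name and size threshold are assumptions.

```python
# Minimal sketch of dye-free head segmentation at 200x (illustration only).
import cv2

img = cv2.imread("sample_200x.png", cv2.IMREAD_GRAYSCALE)
blur = cv2.GaussianBlur(img, (5, 5), 0)
# Otsu thresholding separates dark sperm heads from the bright background.
_, mask = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    # Drop debris; fitEllipse also needs at least 5 contour points.
    if cv2.contourArea(c) < 40 or len(c) < 5:
        continue
    (cx, cy), (w, h), angle = cv2.fitEllipse(c)
    # Abnormality screening would compare the axis lengths and their ratio
    # against WHO reference ranges, then pass crops to a learned classifier.
    print(f"candidate head at ({cx:.0f},{cy:.0f}): axes {w:.1f} x {h:.1f} px")
```

Mid-piece and tail analysis would then trace outward from each detected head, which is where the tracking and structural-analysis techniques mentioned in the objectives come in.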
Sumedh Ojha
Rajasthan Technical University, India
Title: One on One: A Completely New Individual Cricketing Style and Game Design
Biography:
Sumedh Ojha is a Bachelor of Technology student in the Computer Science stream at Poornima Institute of Engineering Technology, Jaipur, which is affiliated with Rajasthan Technical University, Kota, Rajasthan.
Abstract:
One on One (A New Cricketing Style) is an upcoming, completely new style of playing the game of cricket. It is a unique game design based upon a rule book, the One On One Rule Book, which consists of 5 basic laws, (1) The Players, (2) The Game, (3) The Batting Order, (4) Miscellaneous, and (5) The Score, with a total of 15 rules. In this game, every single player plays for himself instead of for a team. The idea of "One on One" presents cricket in an all-new style. It is based upon, or rather is an advanced version of, the game of cricket that is played in the streets of India. It is a game of 12 individual players who compete for a single winning title. The winner is selected on the basis of an individual player's all-round performance in each of the four skills, i.e., batting, bowling, fielding, and wicket-keeping. It brings out the true talent of a cricket player as an all-rounder and increases the momentum of the game of cricket in the field of sports. The online version of the game shall have some notable features, as it shall have the capacity to involve 12 individual players in a single game. Creating a game offline is one thing; the online version, Crickalympics, shall have online features such as the following (a lobby sketch follows this list):
• A user may create a game, join an existing game, or wait to be added as a participant by the host user. One shall even be allowed to publish one's highest scores and place bids on the highest scorers.
• The bidding part is the most essential feature. The game is to be played in a tournament consisting of squads owned by individual users: one shall have the option of creating one's own squad as one wins games, and as a user keeps winning, the user shall be able to unlock more features.
• As the game is an individual game, users shall have full control over the players in the game, with features for controlling a player during batting, bowling, fielding, and wicket-keeping.
These are essentially revolutionary features.
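As a hedged sketch of the lobby flow mentioned in the list above (my illustration; the rule book specifies gameplay, not code), the following data structure enforces the create/join/add-participant behaviour and the 12-player limit.

```python
# Minimal sketch of the Crickalympics lobby flow (illustration only).
MAX_PLAYERS = 12  # "One on One" is a game of 12 individual players

class Game:
    def __init__(self, host):
        self.host = host
        self.players = [host]       # the host counts as a player

    def join(self, user):
        """A user joins an existing game directly."""
        if len(self.players) >= MAX_PLAYERS:
            raise ValueError("game is full")
        self.players.append(user)

    def add_participant(self, requester, user):
        """The host adds a waiting user as a participant."""
        if requester != self.host:
            raise PermissionError("only the host may add participants")
        self.join(user)

    def can_start(self):
        return len(self.players) == MAX_PLAYERS

game = Game("host_user")            # a user creates a game
for i in range(11):
    game.join(f"player_{i}")        # others join or are added by the host
print(game.can_start())             # True: 12 individual players, ready
```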
Bryan Jaycox
Founder at The Build Shop
Title: The Rise of 3D Printing and the Digital-Physical Reality
Biography:
Bryan Jaycox is the Founder and Owner of The Build Shop, LLC, a makerspace in the heart of downtown Los Angeles. The Build Shop was the first of its kind in the Los Angeles area to provide walk-in training and DIY fabrication through high-end commercial 3D printers and laser cutters. The Build Shop provides opportunities for highly localized manufacturing, enabling anything from prototyping and mass production for small businesses to simply creating a customized gift for your grandmother. Prior to this, Bryan's career spanned over a decade in 3D graphics, virtual reality, video games, and serious games, at companies such as the Mixed Reality Lab at USC's Institute for Creative Technologies, The Virtual Reality Medical Center, and LucasArts.
Abstract:
For years we have seen the impact of the digitization of our world: the ability to take the information of a thing and manipulate it as fluidly as a sequence of numbers on a computer. Only now are we beginning to see the impact of this digitization come full circle and begin to remake our physical world in the same way it manipulates bytes in the computer. The dream of the Star Trek replicator is fast becoming a reality of our modern-day world, as the barriers of speed, cost, and material types are quickly being thrown aside to create a new era of 3D-printed goods that could upend our entire economic infrastructure for manufacturing and consuming goods. Imagine a world absent of retail stores, shipping services, and centralized manufacturing, and you get a glimpse of the changes in-home 3D printing has the potential to bring to a future economy. From 3D-printed food to organs, homes, and rocket engines, see how this amazing technology is beginning to reshape our world in more ways than we can possibly imagine. Discover the state-of-the-art technology of today, where the industry is heading, and the most promising advances now improving speed and cost and rapidly making this technology viable as the "Build Anything" machine of the future.
- Prospects and Challenges of the Animation Industry
Session Introduction
Jan Nagel
Entertainment marketing professional
Title: Animation studios: Intellectual property development and production services
Biography:
Jan Nagel has worked as an entertainment marketing professional since 1991 with recognized and award-winning feature and television production studios, including Dream Quest Images, Calico Entertainment, and Virtual Magic Animation. As a consultant and business owner, she brings her experience and expertise to clients such as Original Force 3D in Nanjing, China; Maya Digital Studios in Mumbai, India; Santo Domingo Films in Mexico; Pandoodle in Silicon Valley; KOCCA USA (Korean Content and Cultural Agency); Anya in Bangkok; Rocket Fish in Malaysia; Rhythm & Hues; and other animation and visual effects companies. Prior to her entertainment-marketing career, she worked for Fortune 500 advertising agencies, providing advertising and recruitment marketing for clients such as the U.S. Army, Hughes Aircraft, and Century 21 Real Estate, and served as associate publisher for four career magazines. She is currently an Adjunct Professor at the University of Southern California, teaching the Business of Animation in the graduate animation program. As a Senior Lecturer at Otis College of Art + Design, she teaches a yearlong course in business and career development to digital media seniors. As an Adjunct Professor, she teaches an online course for the Academy of Entertainment and Technologies at Santa Monica College. She has presented at the International Forum on China Cultural Industry in Shanxi Province, China, at the KidScreen Summit, at UCLA Extension for KOCCA USA, and in the USC Master's Program.
Abstract:
Part 1: Animation Production Services- The most important part of a design, animation, or digital media studio is the service it can provide to outside producers: this brings in the income to support the overhead and the labor that keep the doors open. Most studios have the talent pool and the desire to express their own creativity by developing original content, and providing production services can support these endeavors. A successful studio will be able to offer a myriad of services, from character and environment design to storyboarding and layout, and from modeling and rendering to actually making the characters move. In this portion of the master class, we will review what international producers are seeking in production services.
• Labor Pool: Skills and Talent
• Genre Specialties: Game, Television Animation, Feature Film Animation, VFX, and more
• CGI vs. 2D/Traditional: The Different Pipelines
• Art Skills and Computer Skills
Part 2: Intellectual Property Development- Content creation is a business. Stories are created for television, feature film, and home entertainment. Animation can be "evergreen", remaining successful for much longer than many live-action productions, and it can move from country to country and from language to language with very little effort. Animation is an expensive process, but it is a method that allows stories to appeal to large and wide audiences. This means the potential for a return on investment is greater for an animated story.
• Business of Content Development
• Acquisition vs. Development
• Distribution: Broadcasters and Network vs. Distributor
• Co-Production Partnerships
• Co-Ventures
• Treaties, Tax Incentives and Economic Value
• Licensing and Merchandising
Tariq Alrimawi
University of Petra, Jordan
Title: The challenges that face the Arab Animation Cinema
Biography:
Tariq is a Jordanian animated film director. He obtained his first degree, in Graphic Design, from the University of Petra, Jordan, in 2006. In 2010, he graduated with a Master's Degree in Animation from Newport Film School in the United Kingdom. His graduation stop-motion film, entitled Missing, has screened at more than 100 international film festivals, including the Academy Award-qualifying Tokyo Short Shorts International Film Festival and the Chicago International Children's Film Festival. The film has also received 12 awards domestically and internationally. In 2014, Tariq completed his PhD on Arab Animation Cinema at The Animation Academy, Loughborough University, in the United Kingdom. Currently, Tariq is an Assistant Professor in the Graphic Design Department at the University of Petra. To see more of his projects: www.tariqrimawi.com
Abstract:
Arab filmmakers attempt to export their animated films to an international market and to speak to other global cultures. They seek to build a bridge between the Arab world and the West through animated films that have been adapted from Arab and Islamic sources but speak to the universal human condition. The relationship between Islam and the West, though, remains very complicated; the West looks at these projects and already has a perspective on them as religious and ideological propaganda, especially after 9/11. Thus, the majority of these Arabic animated films are rejected by the West because of concerns that they represent the unwelcome principles of foreign cultures. Inherently, there is an Islamophobia about Islamic cultural products as soon as they come to the West; there is suspicion of them and extensive interrogation of them. Ironically, when Western artefacts are exported to Arab countries, though almost inherently at odds with Muslim ideology and Muslim politics, they sometimes find distribution and audiences. The consequences of this relationship between Arab countries and the West are not only ideological, however: Arab filmmakers and producers also face economic challenges, and a number of Arab animation studios have gone out of business or stopped making feature animated films due to the difficulties of reaching international marketplaces. Thus, the focus of contemporary Arab animation is mostly low-budget projects distributed through YouTube and social media, which became the main platform for Arab animation artists to distribute their political works during the 'Arab Spring' in Tunisia, Egypt, Libya, Yemen, Syria, and elsewhere in the Middle East since 2011.
Biography:
Slade is a self-taught 3D animator and motion graphics artist. He is also a guitarist in a national rock band and a recording engineer and producer. He works as a "one man" show and has the skill sets to take an idea, concept, or storyboard and create every portion of the animation through to final delivery of the completed animated movie. He lives and works in Houston, TX, and is currently the Animation Director for the Houston Museum of Natural Science (HMNS). He has worked in many different areas of the industry, from the commercial and entertainment side to the industrial side. He is constantly pushing his boundaries and skill set in order to produce higher-quality animations at the speed needed to meet some of today's unrealistic client demands. He has been able to deliver more than what was expected, and this has enabled him to get his work noticed throughout the industry. He continues to strive to improve everything that he does, throughout his life.
Abstract:
I have always been the type of person to do things my own way. If I buy a product, I have to customize it; I have to make it personal. I have to create on a daily basis. I have to feel productive. I love working in a team, but it seems I have always been led back to doing projects from start to finish by myself. This is very rewarding, but it can be far more time-consuming without help. In the end, you become more valuable because you have the skill sets of multiple people and you are constantly pushing yourself to become an increasingly better artist and creative thinker. This again translates to value, which translates to dollars because, let's face it, we have to work to pay the bills. It is much more rewarding to be able to do what you love on a daily basis and earn a nice living. This is what I will be discussing: the "one man (person)" show and finding your niche. It is an open talk: "How did I do this? How can you do this?" Unleash your Creativity at CG 2015.