Scientific Program

Conference Series Ltd invites all participants across the globe to attend the 6th International Conference and Expo on Computer Graphics & Animation | Toronto | Ontario | Canada.

Day 1 :

Keynote Forum

John McIntosh

Visiting Professor & Vice-Dean, ShanghaiTech University, China

Keynote: Empathy and Emotion in Computer Animation

Time : 10:00 - 10:45

Biography:

John Balfour McIntosh has been a Visiting Professor and Vice-Dean of the School of Creativity and Art at ShanghaiTech University, China, since October 2017. He earned his Master of Fine Arts at Yale University, is a Fulbright Specialist, and serves on the Board of Directors of the Visual Effects Society NYC. From 1998 to 2017, he founded and chaired the BFA Computer Art, Computer Animation and Visual Effects program at the School of Visual Arts NYC. During his 19 years as chair, the BFA Computer Art program grew into the largest full-time computer animation degree program in New York and became an internationally recognized digital arts program.

Abstract:

The concept of human motivation in a CG character versus that of a skilled method actor speaks to a central issue of computer graphics: how do animators convey genuine emotion when the CG character has no inherent emotional depth? In fact, the CG character is nothing but a wire-framed shell. The surface of the character is less than skin deep. There are no experiences or emotions for the character to draw on to create a genuine, emotive performance. The CG character is a puppet and the animator the puppeteer.

In computer graphics, a technology-dependent creative medium, it is inherently difficult to portray genuine emotion. Yet when an outstanding animated performance is married to classic cinematic principles, the emotive power of animation, particularly in the synthesis of lighting, sound, gesture, expression, and character performance, can be genuine and powerful, although it is not yet equal to a brilliant live-action performance from an actor in terms of depth of emotion and apparent spontaneity.

Break:

Networking & Refreshment Break: 10:45 - 11:00

Keynote Forum

Koichi Noguchi

Producer / VFX Supervisor, Toei Animation, Japan

Keynote: Present Situation of CGI Animation in the Japanese Market

Time : 11:00 - 11:45

Biography:

Koichi Noguchi is a Producer and VFX Supervisor at Toei Animation, Japan, and a postdoctoral researcher at Nihon University, Japan. A leading producer, CGI director, and VFX supervisor, he has designed and directed animation and VFX sequences for more than a hundred American and Japanese feature films, including ‘Species (1995)’, ‘Godzilla: Final Wars (2004)’, ‘One Piece: Baron Omatsuri and the Secret Island (2005)’, ‘Digimon Savers (2006: TV Animation)’, and ‘Expelled from Paradise (2014)’, to name a few. Beyond features, he has lent his expertise across various verticals: art installations, new media, and commercials.

Abstract:

Japanese CGI animation has only recently become popular in its domestic market. In 2014, topical movies such as “Rakuen Tsuiho: Expelled from Paradise” and “Stand by Me, Doraemon” were released, and TV series such as “Knights of Sidonia” and “Ronia the Robber's Daughter” were broadcast. The second decade of the 21st century has seen more CGI animation films produced and released, to growing audiences, than the beginning of the century. Furthermore, CGI animators have improved their skills, leading to high-quality CGI animation that appeals to the Japanese market. I analyze the trends of CGI animation in the Japanese market, focusing on “Rakuen Tsuiho: Expelled from Paradise” and “Kado: The Right Answer”, both produced by Toei Animation Co., Ltd.

Break:

Group Photo: 11:45 - 12:00

  • Computer Graphics & Animation
Location: Toronto

Chair

James Parker

University of Calgary, Canada

Session Introduction

James Parker

Full Professor, University of Calgary, Canada

Title: Randomness and Generative Art

Time : 12:00 - 12:35

Speaker
Biography:

James Parker is Director of MinkHollow Media and Professor of Art, Digital Media Laboratory, University of Calgary, Canada. He has degrees in Applied Mathematics, Computer Science (M.Sc.), and Informatics (Ph.D., Universiteit Ghent, with greatest distinction, 1998). He has been a Full Professor of Computer Science, a Professor of Drama, and a Professor of Art in a 40-year career in academia. He has published over 170 technical articles on simulation, video games, computer vision, and artificial intelligence. He is also the author of 12 books, the most recent being “Generative Art: Algorithms as Artistic Tool”.

Abstract:

Art in general can be thought of as a stochastic process. No two drawings or paintings are exactly alike, and cannot be, so long as humans are involved. Generative art, the defining of an artwork using an algorithm, can result in very precise duplications of artworks, but this is rarely interesting. Art is a human activity and artworks are a means of communication between humans, even in the generative domain. Adding randomness to a generative work makes it seem more human, and often more interesting. How much randomness should there be? What is the context of the random features? Why is randomness interesting? These questions will be discussed, along with some ideas on how to use randomness as a tool in creating artworks.
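
As a minimal illustration of that use of randomness (an assumed sketch, not the speaker's own code), the following Python script draws a regular grid of circles and jitters each circle's position, size, and tone, so no two runs of the script produce the same image:

# Generative sketch: a regular grid of circles perturbed by a random number
# generator, written out as an SVG file using only the standard library.
import random

CELL, COLS, ROWS = 40, 12, 8
shapes = []
for row in range(ROWS):
    for col in range(COLS):
        cx = col * CELL + CELL / 2 + random.uniform(-8, 8)   # jitter the position
        cy = row * CELL + CELL / 2 + random.uniform(-8, 8)
        r = CELL * random.uniform(0.2, 0.45)                  # jitter the size
        grey = random.randint(40, 220)                        # jitter the tone
        shapes.append('<circle cx="%.1f" cy="%.1f" r="%.1f" fill="rgb(%d,%d,%d)"/>'
                      % (cx, cy, r, grey, grey, grey))

svg = ('<svg xmlns="http://www.w3.org/2000/svg" width="%d" height="%d">%s</svg>'
       % (COLS * CELL, ROWS * CELL, ''.join(shapes)))
with open('generative_grid.svg', 'w') as f:
    f.write(svg)

The amount of jitter in each line controls how "human" the result looks; setting the random ranges to zero collapses the piece back to a precise, and arguably less interesting, duplicate.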

Speaker
Biography:

Tyler Ayres is an Associate Professor in Animation in the School of Media Arts and Studies at Ohio University, USA. He is an award-winning animator and continues to work professionally. He is known for being a finalist in the Nicktoons Animation Festival in 2008, as well as for screenings at many international children's film festivals.

Abstract:

Stop-motion animation (SMA) is commonly associated with miniatures positioned frame by frame to create motion. I propose a new process, synthesizing digital 2D animation, digital 3D animation, and paper modeling with components of traditional SMA. This technique allows artists to design their animations digitally, yet actualize them physically. A case study was done on an animated pumpkin character.

Using this technique, artwork is drawn and colored digitally in Illustrator. All moving parts of the face are assigned to individual layers and imported into After Effects, where the layers are key-framed to create a cyclical animation of the facial expressions. This series of images is used as an animated texture on a 3D polygon. The animation adds emotion, while the timing of the cycle allows the facial animation to be used as overlapping action on the 3D polygon.

One key to my experimental process is to achieve optimal form without creating a complex polygon model. Low polygon spherical models are created in Maya and additional extrusions are added for character enhancement. A simple rig is inserted inside the polygon for bend and squash control. Each frame of the animated polygon is then exported from Maya and imported into Pepakura Designer 4 (used to unfold the polygon into a flattened 2D shape).
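
A minimal sketch of that per-frame export step, as it might be scripted through Maya's Python interface, is shown below; the output path, file naming, and the assumption that the animated pumpkin mesh is selected are illustrative, not the author's actual setup.

# Illustrative per-frame OBJ export from Maya (hypothetical paths); assumes the
# animated mesh is selected so each exported frame can be unfolded in Pepakura.
import maya.cmds as cmds

cmds.loadPlugin('objExport', quiet=True)      # OBJ exporter that ships with Maya

START, END = 1, 17                            # the 17-frame loop of the case study
for frame in range(START, END + 1):
    cmds.currentTime(frame, edit=True)        # step the timeline to this frame
    cmds.file('/tmp/pumpkin_%02d.obj' % frame,
              force=True,
              options='groups=1;ptgroups=1;materials=0;smoothing=1;normals=1',
              type='OBJexport',
              exportSelected=True)            # write the deformed mesh for this frame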

For my case study, I chose to create a 17-frame looped animation using this rig.
The digital format transitions to physical as pieces of the polygon are laid flat with cut lines, score lines and tabs for gluing added. Each flattened polygon is printed on 160lb card stock using a Mimaki UV Jfx200 flatbed printer. An Epilog Fusion 75-watt laser cutter is used to cut the 2D shapes, which were then folded to create a physical form. The final steps include traditional SMA techniques, creating a final project that was imagined in the digital realm but actualized in the physical realm.

This technique case study has set the stage for a project larger in scale, with more complex polygon arrangements. Plans are in the works for a full-scale humanoid walk cycle.

Break:

Panel Discussion
Lunch Break: 13:10 - 14:00

Speaker
Biography:

Takahiro Yanagi is from OTSL Inc., Japan. Since 1991 he has worked at DENSO Research Laboratory, HONDA R&D, YAMAHA Research Center, SILVACO, Mitsui, ANSYS (formerly ANSOFT), Siemens Wireless Modules, PI Research Labo LLC, and OTSL Inc. He currently works on high-precision sensor simulator development for autonomous driving, and his interest is modeling and simulation from the semiconductor level up to the autonomous-system level for the autonomous-driving era. His experience includes high-speed analog/mixed-signal design and simulation, F1 EMS system development, electronic throttle control development using genetic algorithms and neural networks, an SDR (software-defined radio) project for mobile phones and base stations, high-speed broadcasting device and system development using FM modulation, development of the world's first transient noise simulator using Monte Carlo methods, an M2M automotive project, power semiconductor modeling for SiC, power system design and modeling for EV/PHV, and the world's first millimeter-wave radar simulator, together with LIDAR, camera, far-IR sensor, and ultrasonic simulators, collectively called the COSMOsim Framework.

Abstract:

Autonomous-driving development has recently accelerated through the use of various simulations; however, there are few dedicated simulator tools for the sensors that detect and measure objects, and most simulators cannot faithfully reproduce actual radar, LIDAR, and other sensors. COSMOsim is the only high-accuracy sensor simulator, developed on the basis of everything from radar electrical design to system-level hardware and software. This presentation describes what genuine sensor simulation means for autonomous-driving simulation.

Ivan Braun

Founder, generated.photos & Icons8

Title: How generated.photos created 100k faces with machine learning

Time : 14:35 - 15:10

Speaker
Biography:

Ivan Braun is the founder of generated.photos and Icons8. His recent project is a generated library of 100,000 faces, which has received coverage from major news publications such as Vice, The Verge, and Fast Company.

Abstract:

We are building technology to create on-demand media. https://generated.ph
We have built an original machine learning dataset and used StyleGAN (an amazing resource by NVIDIA) to construct a realistic set of 100,000 faces. Our dataset was built by taking 29,000+ photos of 69 different models over the last two years in our studio. We took these photos in a controlled environment (similar lighting and post-processing) to ensure that each face had consistently high output quality. After shooting, we carried out labor-intensive tasks such as tagging and categorizing.
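
As a rough, self-contained illustration of the sampling idea (not the project's code and not NVIDIA's StyleGAN implementation), the toy sketch below draws latent codes from a normal distribution, maps them to an intermediate style space, pulls them toward the average style with the truncation trick, and decodes them into images; the tiny networks are stand-ins for a trained generator.

# Toy StyleGAN-style sampler: z -> mapping network -> truncated w -> synthesis.
import torch
import torch.nn as nn

class ToyStyleGenerator(nn.Module):
    def __init__(self, z_dim=512, w_dim=512, img_size=64):
        super().__init__()
        self.mapping = nn.Sequential(nn.Linear(z_dim, w_dim), nn.ReLU(),
                                     nn.Linear(w_dim, w_dim))
        self.synthesis = nn.Linear(w_dim, 3 * img_size * img_size)
        self.img_size = img_size
        # running average of styles; a real StyleGAN tracks this during training
        self.register_buffer('w_avg', torch.zeros(w_dim))

    def forward(self, z, truncation=0.7):
        w = self.mapping(z)                                # map z to the style space
        w = self.w_avg + truncation * (w - self.w_avg)     # truncation trick
        img = self.synthesis(w).view(-1, 3, self.img_size, self.img_size)
        return torch.tanh(img)                             # images in [-1, 1]

g = ToyStyleGenerator()
faces = g(torch.randn(8, 512))     # 8 random samples (noise images for the toy net)
print(faces.shape)                 # torch.Size([8, 3, 64, 64])

With a generator actually trained on the studio dataset, the same sampling loop is what turns a fixed set of photographed models into an effectively unlimited library of synthetic faces.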

Siqi Cai

Royal Holloway, University of London, UK

Title: Interactive movie — from animation to interactive game
Speaker
Biography:

Siqi Cai is a Reader in Animation in the Department of Art and Design at Royal Holloway, University of London. He has worked professionally with a number of reputed animation and game development companies. He is known for the StoryFutures project.

Abstract:

In 1978, a new home video format called the LaserDisc (abbreviated LD) first became commercially available for the storage and playback of movies as an alternative to videotape. Compared with other home video formats at the time, LD offered higher image quality and resolution as well as a random access feature, which allowed the video to jump from one point to another for instant access.[1] Both these advantages allowed the LD technique to be adapted to video games.[2]

Consequently, in the 1980s a series of video games using LD as the game carrier were published, often called “the interactive movie.” Interactive movies are essentially video games that use a significant amount of pre-calculated image data as the primary content, focus on cinematic narrative, and offer comparatively poor gameplay. The poor gameplay is due to infrequent interactivity, which limits players to interacting with the game through only a few buttons.

The development of the interactive movie did not progress smoothly. Despite providing experimental gameplay and cinematic game experiences, the interactive movie fell into decline at the end of the 1990s due to a poor reputation.[3] Critical reasons for its failure, as summarised by scholars who study video games (ludologists), included a shortage of interactivity, limited branching plots, unsatisfying acting, and low video quality compared with a real movie.[4] To make matters worse, early interactive movies were labelled “low-interactivity games” by Chris Crawford, a computer game designer and ludologist who is an authority in the field of game interactivity research.[5]

Most of the cited disadvantages point to a design built on poor gameplay and heavy use of video clips. While it is true that the interactive movie was, to some extent, responsible for most of its own failures,[6] some people continue to enjoy this type of game, and interactive movies with related features, such as Heavy Rain (Sony Computer Entertainment, 2010), The Walking Dead (Telltale Games, 2012), and Minecraft: Story Mode (Telltale Games, 2015), are still being published. As Crawford concluded, most game designers wanted to, but for many reasons could not, make the most of the interactive movie's gameplay.[7] On this basis, the reduced gameplay and broad focus on animated content were not the key issues to blame for its failures.

Through further research, this thesis establishes a distinct understanding of the interactive movie, which helps confirm the idea that the interactive movie has the potential to turn reduced gameplay and other specialities into accessibility, depth of player thinking, and better overall interactivity quality. This dissertation analyses the interactive movie's poor gameplay and its essence from a historical perspective. Professional terms such as interactivity, interactivity frequency, process intensity, and data intensity are introduced and used to argue for and validate the conclusion, offering a comprehensive understanding of this category of gameplay.

  • Video Presentation
Location: Toronto

Session Introduction

Sumit Gupta

Assistant Professor, Manipal University, Dubai, UAE

Title: Connecting computer graphics and media production through motion graphics

Time : 15:10 - 15:45

Speaker
Biography:

Sumit Gupta is an Assistant Professor in Graphics & Multimedia, School of Media & Communication at Manipal University, Dubai, UAE. He holds a master's degree in Multimedia Technology and a bachelor's degree in Animation & Multimedia. He previously worked as a Senior Lecturer at ISBAT University, Uganda, for seven years, and for four years in India, in media, production, and teaching; he has wide knowledge and experience in training and teaching media and multimedia students from more than 21 countries over the past 11 years.

Abstract:

“Computer Graphics” is commonly associated with media and multimedia applications. This paper provides a complete case study of the application of computer graphics in media production, with reference to print, electronic, and web media and the creative approach of motion graphics. The technological revolution has unified the world and humanity into what is known as a global village. Computer graphics encapsulates creative media work from scratch to final creation, using digital technologies throughout to streamline the production process and enhance creative artistic expression. Creativity in media finds its new edge in the current scenario of the digital world and virtual reality.
The most prominent role of computer graphics in media is the design of motion graphics for television and news. Here, several sets of shapes are choreographed together using a wide range of effects to produce compelling footage for television and the web. The realistic images and video viewed and manipulated on digital media platforms, and computer simulations, could not be created or supported without the enhanced capabilities of modern computer graphics. Graphic communication is the key area of study that bridges the gap between computer graphics and media production.
One of the prime applications of computer graphics and multimedia is its use for digital entertainment. Computer graphics techniques are used in making motion pictures, music videos, and television shows. Sometimes graphics scenes are displayed on their own, and sometimes they are combined with actors and live scenes; graphics objects can be combined with live action and images. Processing techniques can be used to produce a transformation of one person or object into another.

Cagri Baris Kasap

Kadir Has University, Turkey

Title: An iterative design process: Case of Grand Theft Auto

Time : 15:45 - 16:20

Speaker
Biography:

Cagri Baris Kasap is an Assistant Professor in the Department of Visual Communication Design at Kadir Has University, Istanbul, Turkey. He works in the fields of UX/UI and interaction design.

Abstract:

While on one level Rockstar Games' Grand Theft Auto series (GTA) is all kitschy, gratuitous violence for entertainment purposes, it is also a masterpiece of interactive design. Arguably, it presents one of the most sophisticated developments in commercial video gaming to render a highly traversable urban space, one in which a player performs actions with a tremendous degree of freedom and unscripted spontaneity. This accounts for its wild popularity in the gaming market. The best-selling video game in America in 2001, GTA III saw its success usurped only by the release of the game's next evolution, Grand Theft Auto: Vice City, which became the year's best seller in 2002. With the October 2004 release of Grand Theft Auto: San Andreas, likely the most anticipated game of the year, Rockstar once again set the gaming world on fire with its latest sprawling work of twisted genius. Since its first version was released in 1997, Grand Theft Auto, a game that fulfills the standards of being an ‘action-adventure’, ‘driving’, ‘role-playing’, ‘stealth’ and ‘racing’ game all at once, has gone through several (seven) versions. In this paper, I will try to map out the similarities and differences between each version.

Break:

Networking & Refreshment Break: 16:20 - 16:35