Scientific Program

Conference Series LLC Ltd invites participants from across the globe to attend the 5th International Conference and Expo on Computer Graphics & Animation in Montreal, Quebec, Canada.

Day 1 :

Keynote Forum

Jos Stam

Adjunct Professor, University of Toronto, Canada

Keynote: Art of Fluid Animation

Time : 10:00-10:45

Biography:

Jos Stam was born in the Netherlands and educated in Geneva, Switzerland, where he received dual Bachelor's degrees in Computer Science and Pure Mathematics. In 1989, he moved to Toronto, where he completed his Master's and Ph.D. degrees in Computer Science. After that, he pursued postdoctoral studies as an ERCIM fellow at INRIA in France and at VTT in Finland. In 1997 he joined the Alias Seattle office as a researcher and stayed there until 2003, when he relocated to Alias' main office in Toronto. He joined Autodesk as a Senior Principal Research Scientist through Autodesk's acquisition of Alias in 2006. He is also affiliated with the University of Toronto as an Adjunct Professor in the Department of Computer Science. His research spans several areas of computer graphics: natural phenomena, physics-based simulation, rendering and surface modeling, especially subdivision surfaces. His latest creation is a unified dynamics solver called Nucleus, which is embedded in MAYA and has been used in many movies to create special effects. He has published papers in all of these areas in journals and at conferences, most notably at the annual SIGGRAPH conference. In 2005, he was awarded one of the most prestigious awards in computer graphics: the SIGGRAPH Computer Graphics Achievement Award. He also won two Technical Achievement Awards from the Academy of Motion Picture Arts and Sciences: in 2005 for his work on subdivision surfaces and in 2007 for his work on fluid dynamics. He was also featured in a January 2008 Wired magazine article.

Abstract:

In this talk, I present my work on fluid dynamics for the entertainment industry. The talk will introduce basic concepts of fluids and a brief history of computational fluid dynamics. Subsequently, I will discuss my contributions in applying computational fluid dynamics to the entertainment industry, such as games and movies. I will also discuss our implementation of this technology in our MAYA animation software. In 2008 I received a Technical Achievement Award from the Academy of Motion Picture Arts and Sciences (a "tech Oscar") for this work. I will also mention my work on bringing fluid dynamics to mobile devices, such as the Pocket PC in 2001 and the iPhone in 2008. In 2010 we released FluidFX and MotionFX for iOS and MacOS. The talk will feature many live demonstrations and animations and is, in essence, a condensed version of my book "The Art of Fluid Animation".
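As background for the demonstrations, the sketch below illustrates the semi-Lagrangian advection step that underlies the stable-fluids style of solver associated with this line of work. It is a minimal NumPy illustration only; the unit grid spacing, clamped boundaries, and field names are assumptions, not the speaker's implementation.

```python
import numpy as np

def advect(field, u, v, dt):
    """Semi-Lagrangian advection of a scalar field on a uniform 2D grid.

    field : 2D array of the carried quantity (e.g. smoke density)
    u, v  : 2D arrays of velocity components, same shape as field
    dt    : time step (grid spacing assumed to be 1)
    """
    ny, nx = field.shape
    j, i = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")

    # Trace each cell centre backwards along the velocity field,
    # clamping the end point to the grid interior.
    x = np.clip(i - dt * u, 0, nx - 1.001)
    y = np.clip(j - dt * v, 0, ny - 1.001)

    # Bilinear interpolation of the field at the back-traced positions.
    i0, j0 = x.astype(int), y.astype(int)
    i1, j1 = i0 + 1, j0 + 1
    s, t = x - i0, y - j0
    return ((1 - s) * (1 - t) * field[j0, i0] + s * (1 - t) * field[j0, i1] +
            (1 - s) * t * field[j1, i0] + s * t * field[j1, i1])
```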

Break:

Networking & Refreshment Break: 10:45-11:00

Keynote Forum

Paul Kruszewski

CEO & Founder, Wrnch Inc., Canada

Keynote: Using AI to create frictionless motion capture

Time : 11:00-11:45

Biography:

Dr. Paul Kruszewski has been at the bleeding-edge intersection of real-time AI and computer graphics since 2000, when he founded AI.implant to use AI (flocking behaviours and path finding) to create and simulate huge crowds of interacting autonomous characters. Customers included Disney and Lucasfilm for visual effects; BioWare and EA for game development; and L3 and Lockheed Martin for military simulation. AI.implant was acquired in 2005 by Presagis, the world's leading developer of software tools for military simulation and training. In 2007, he founded GRIP to use AI (behaviour trees) to create high-fidelity autonomous characters capable of rich and complex behaviours. Customers included BioWare, Disney, EA and Eidos. GRIP was acquired in 2011 by Autodesk, the world's leading developer of software tools for digital entertainment. In 2014, he founded wrnch to use AI to make the world safer, healthier and more fun by turning ordinary cameras into frictionless motion capture systems. Customers include SoftBank and NVIDIA.

Abstract:

Motion capture has revolutionized computer graphics by making 3D animation incredibly lifelike. Notwithstanding this enormous success, the prohibitively high setup and operating costs of traditional motion capture techniques limit its use to only the largest 3D content creators. In this talk, we describe how deep learning can overcome these barriers and turn any cell phone camera into a production-ready motion capture system.
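As a rough illustration of such a pipeline, the sketch below reads frames from an ordinary camera and passes each one to a 2D pose estimator. The PoseEstimatorStub class is a placeholder for a trained network, and the joint count and names are assumptions; none of this is wrnch's actual system.

```python
import cv2          # OpenCV, for grabbing frames from an ordinary camera
import numpy as np

class PoseEstimatorStub:
    """Placeholder for a deep pose-estimation network (e.g. a heatmap CNN).

    A real system would run a trained model here; this stub just returns
    random (x, y, confidence) triples for 17 COCO-style body joints so the
    surrounding pipeline can be sketched end to end.
    """
    def infer(self, frame):
        h, w = frame.shape[:2]
        xy = np.random.rand(17, 2) * [w, h]   # joint positions in pixels
        conf = np.random.rand(17, 1)          # per-joint confidence
        return np.hstack([xy, conf])

estimator = PoseEstimatorStub()
cap = cv2.VideoCapture(0)            # phone or webcam feed; device index assumed
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    keypoints = estimator.infer(frame)   # one 2D skeleton per frame
    # Downstream, these per-frame skeletons would be lifted to 3D and
    # retargeted onto an animation rig to serve as motion-capture data.
    print(keypoints.shape)
cap.release()
```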

Break:

Group Photo: 10:45-11:00

  • Sessions: Computer Graphics | Computer Animation | Animation Industry | Modeling | Simulation | Game Design & Development | Gamification and Social Game Mechanics

Chair

David Xu

Regent University, USA

Session Introduction

David Xu

Professor, Regent University, USA

Title: How to use the Maya dynamic hair system to model realistic hairstyles

Time : 12:00-12:35

Biography:

Professor David Xu is a tenured Associate Professor at Regent University, specializing in computer 3D animation and movie special effects. He received an MFA in Computer Graphics (3D Animation) from Pratt Institute in New York. He has served as a senior 3D animator at Sega, Japan; a senior CG special-effects artist at Pacific Digital Image Inc., Hollywood; and as a professor of animation at several colleges and universities, where he developed 3D animation programs and curricula. He has been a committee member of the SIGGRAPH Electronic Theater, where he was recognized with an award for his work. In 2011, at the invitation of Shanghai People's Fine Arts Publishing House, he published the book Mastering Maya: The Special Effects Handbook.

Abstract:

The Maya dynamic hair system is a collection of hair follicles, which control the attributes and curves associated with a particular hair clump and how the hairs attach to a NURBS or polygonal surface.
In this presentation, Professor Xu will first demonstrate a catman model with realistic hair that he created. He will also demonstrate how to create NURBS curves, Paint Effects strokes and hair follicles, and how to use the various attributes on a hair system to modify the look and behavior of the hair. Finally, he will discuss how to use the Maya dynamic hair system to model a realistic hairstyle, and how the visible result is affected by both the hair follicles and the hair system attributes.
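A minimal sketch of this kind of attribute editing is shown below (Maya Python, maya.cmds). It assumes a scene in which a hair system has already been created on a surface via Maya's Hair/nHair menu, so that a shape node named hairSystemShape1 exists; the node name and attribute values are illustrative, and the set of available attributes can vary between Maya versions.

```python
import maya.cmds as cmds

hair = "hairSystemShape1"  # assumed hair system shape node

# Shape the overall look of each clump.
cmds.setAttr(hair + ".hairsPerClump", 40)   # hairs generated per follicle clump
cmds.setAttr(hair + ".clumpWidth", 0.15)    # how tightly hairs bundle together
cmds.setAttr(hair + ".curl", 0.3)           # amount of curl along each hair
cmds.setAttr(hair + ".curlFrequency", 8)    # number of curls along the length
cmds.setAttr(hair + ".noise", 0.1)          # irregularity added to each hair

# Query a value back, e.g. to confirm the edit.
print(cmds.getAttr(hair + ".clumpWidth"))
```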

Gregory Ducatel

Global Head of Software, Mill Film, Montreal, Canada

Title: Improvement on top of Pixar USD

Time : 12:35-13:10

Biography:

Gregory Ducatel is Global Head of Software at Mill Film in Montreal, Canada. He has an excellent knowledge and understanding of the processes needed to reduce a company's technical debt and increase the value delivered by the team. He has worked on or contributed to many projects, including Assassin's Creed, The Revenant, X-Men: Apocalypse, Independence Day 2, Civilization: Beyond Earth, Mortal Kombat X, The Grey, Clash of the Titans, Red Cliff, The Day the Earth Stood Still, Laundry Warrior, Public Enemies, and Mirror Mirror.

Abstract:

We will demonstrate the improvements made on top of Pixar USD and the Hydra render engine (including OptiX from NVIDIA) to support a VFX pipeline.
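The improvements themselves are the subject of the talk; as baseline context only, the sketch below shows minimal scene authoring with Pixar's open-source USD Python API (pxr). The file names, prim paths and radius value are illustrative assumptions, not material from the presentation.

```python
from pxr import Usd, UsdGeom

stage = Usd.Stage.CreateNew("shot_layout.usda")   # new layer on disk
world = UsdGeom.Xform.Define(stage, "/World")     # root transform prim

# Author a simple gprim and set one of its attributes.
ball = UsdGeom.Sphere.Define(stage, "/World/ball")
ball.GetRadiusAttr().Set(2.0)

# Reference an external asset layer into the scene graph (file name assumed).
hero = stage.DefinePrim("/World/hero")
hero.GetReferences().AddReference("hero_model.usda")

stage.GetRootLayer().Save()
```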

Break:

Lunch 13:10-14:00 @ Foyer

Irving Cruz-Matías

Professor, University of Monterrey, Mexico

Title: Characterization of pore space using a non-hierarchical decomposition model

Time : 14:00-14:35

Biography:

Irving A. Cruz-Matías is a full-time professor in the Computer Science Department at the University of Monterrey, Mexico. He received his Ph.D. in Computing from the Polytechnic University of Catalonia, Barcelona, Spain, in 2014. His research interests include the modelling, analysis and visualization of 3D biomedical samples, digital image processing and, in general, the application of computer graphics in the bioengineering field.

Abstract:

Bio-CAD and in-silico experimentation are attracting growing interest in biomedical applications, where scientific data coming from images of real samples are used to evaluate physical properties. In this context, analyzing the pore-size distribution is a demanding task that helps interpret the characteristics of porous materials by partitioning the pore space into its constituent pores. Pores are defined intuitively as local openings that can be interconnected by narrow apertures called throats, which control non-wetting phase invasion in physical methods. There are several approaches to characterizing the pore space in terms of its constituent pores, several of which require the prior computation of a skeleton. This paper presents a new approach to characterizing the pore space, in terms of a pore-size distribution, that does not require skeleton computation. Throats are identified using a new decomposition model that performs a 2D spatial partition of the object, consisting of a set of disjoint boxes, in a non-hierarchical, sweep-based way. This approach enables the characterization of the pore space in terms of a pore-size distribution.
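As a simplified illustration of a non-hierarchical, sweep-based partition into disjoint boxes (not the authors' actual model, which also identifies throats and derives a pore-size distribution), the sketch below decomposes a small 2D binary pore mask row by row; the example array is made up.

```python
import numpy as np

def sweep_boxes(mask):
    """Partition the True pixels of a 2D mask into disjoint boxes.

    Sweeps the image row by row; a horizontal run of pore pixels is merged
    with the box ending on the previous row only if it spans exactly the
    same columns, otherwise a new box is started.
    Returns a list of boxes as (row_min, row_max, col_min, col_max).
    """
    open_boxes = {}   # (col_min, col_max) -> row on which the box started
    finished = []
    for r, row in enumerate(mask):
        runs = set()
        c = 0
        while c < len(row):
            if row[c]:
                start = c
                while c < len(row) and row[c]:
                    c += 1
                runs.add((start, c - 1))
            else:
                c += 1
        # Close boxes whose column extent is not continued on this row.
        for extent in list(open_boxes):
            if extent not in runs:
                finished.append((open_boxes.pop(extent), r - 1, *extent))
        # Open boxes for runs that are new on this row.
        for extent in runs:
            open_boxes.setdefault(extent, r)
    for extent, r0 in open_boxes.items():
        finished.append((r0, len(mask) - 1, *extent))
    return finished

pores = np.array([[0, 1, 1, 0],
                  [0, 1, 1, 0],
                  [1, 1, 0, 0]], dtype=bool)
print(sweep_boxes(pores))   # [(0, 1, 1, 2), (2, 2, 0, 1)]
```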

Xupu Geng, Tian Li

Senior Engineer, Xiamen University, China

Title: Stylistic mixture of Monet and Chinese ink painting by deep learning

Time : 14:35-15:10

Biography:

Xupu Geng is a Senior Engineer in the State Key Laboratory of Marine Environmental Science, Xiamen University, China. His research interests are in the area of deep learning and its application in image processing.

Abstract:

Image style transfer is a classical problem in computer graphics and vision. With the rapid development of deep learning in recent years, Generative Adversarial Networks (GANs) and variants such as CycleGAN have been proposed to generate or transform images. Monet and Chinese ink painting are two influential styles in landscape painting. They share some affinity with impressionism, but they differ greatly in color and depth of focus. Here we attempt to mix the two styles with CycleGAN to create a new kind of artwork. The proposed method has many potential applications in artistic creation.
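As a small illustration of the cycle-consistency idea that CycleGAN relies on, the sketch below pairs two toy generators for the Monet and ink domains (PyTorch). The tiny networks and dummy tensors are placeholders rather than the authors' models; the weight of 10 is the value commonly used in the original CycleGAN paper.

```python
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Stand-in generator: a couple of conv layers instead of a full ResNet."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh())
    def forward(self, x):
        return self.net(x)

G = TinyGenerator()   # Monet -> ink translation
F = TinyGenerator()   # ink -> Monet translation
l1 = nn.L1Loss()
lam = 10.0            # cycle-loss weight, as in the original CycleGAN paper

monet = torch.rand(4, 3, 256, 256)   # batch of Monet-style crops (dummy data)
ink = torch.rand(4, 3, 256, 256)     # batch of ink-painting crops (dummy data)

# Forward and backward cycles: translating to the other style and back
# must reconstruct the original images.
cycle_loss = lam * (l1(F(G(monet)), monet) + l1(G(F(ink)), ink))
cycle_loss.backward()   # in training this is combined with adversarial losses
```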

Tian Li

Assistant Professor, Xiamen University, China

Title: Perception: Being art in virtual reality

Time : 15:10-15:45

Biography:

Tian Li is an Assistant Professor in the College of Humanities, Xiamen University, China. Her research interests are in the area of virtual reality and new media art. She is currently focusing on the art theory of virtual, augmented and mixed reality.

Abstract:

With virtual reality (VR) technology, an artwork becomes a process rather than a definite object; the receiver's perception can itself become the process of art, and it has taken on unprecedented importance in art creation. It is VR that truly gives the receiver the identity of "creator": the completion of a VR artwork becomes inseparably bound to the receiver's perception, which is woven, strand by strand, into the activity that calls the VR artwork into play. It is only through the process of the receiver's perception that the artwork can enter into its changing visions. Throughout this process, the receiver's perception may be everywhere, and VR becomes a psychological state describing the perception that occurs during art reception. On one side, full-body immersion in VR broadens the aesthetic perception of the artwork; on the other side, there can be a lack of emotion and thought to some degree, so the conflict between full-body immersion and imagination remains to be mediated in present VR art.

Break:

Networking & Refreshment Break: 15:45-16:00