Day 1:
Research Director, CNR-IMATI-Genova, Italy
Michela Spagnuolo is Research Director at CNR-IMATI-GE, where she has been working since 11/07/2001. Her research interests include geometric and semantic modelling of 3D objects, approaches based on computational topology for the analysis of shapes, and methods for the evaluation of similarity at the structural and semantic level. On these research topics, she has co-supervised 6 PhD theses (plus two ongoing) and several Laurea/Master degree theses.
She has authored more than 130 reviewed papers in scientific journals and international conferences, and is associate editor of international journals in Computer Graphics (currently, The Visual Computer and Computers & Graphics). She actively works as chair of conferences and workshops, and she is a member of the steering committees of Shape Modeling International and of the EG Workshops on 3D Object Retrieval. In 2014, she was nominated Fellow of the Eurographics Association.
Since 2005, she has been responsible for the CNR-IMATI research unit identified as ICT.P10.009, Advanced techniques for the analysis and synthesis of 3D shapes; since 2007, she has also been responsible for the research unit identified as INT.P02.008 / Modelling and analysis, tools of high-performance computing and grid computing for data and applications in bioinformatics, Interdept. Project on Bioinformatics (now within the CNR Flagship Project Interomics).
She has been working as scientific responsible for several international and national projects.
Digital manipulation and analysis of tangible cultural objects has the potential to revolutionize the way classification, stylistic analysis, and refitting of fragments are handled in the cultural heritage area. Similarity evaluation underlies most of these challenges, requiring the ability to reason about the many diverse properties of artifacts, which may relate to geometric attributes (e.g., spatial extent, aspect), to colorimetric properties (e.g., colour, texture), to specific traits that fragments exhibit (e.g., decorations), or to metadata documenting the artefacts. 3D modelling, processing and analysis are now mature enough to allow handling 3D digitized objects as if they were physical, and semantic models allow for rich documentation of many different aspects of artefacts or assets of any complexity, as well as of contextual information about them. In this context, the talk will give an overview of issues and trends related to the analysis, presentation and documentation of digital cultural assets, with a focus on the research challenges tackled within the EC project GRAVITATE: Re-unification, the process of discovering parts of the same object held in different collections and evaluating whether and how they could fit together; Re-assembly, which consists in digitally recreating a historical artefact from the set of its fragments; and Re-association of objects, which allows researchers to seek new understanding and insights into the movement of and links between different communities on the basis of similar artefacts found in different locations.
Professor, Westfälische Wilhelms-Universität Münster, Germany
Lars Linsen is a Full Professor (W3) of Computer Science at the Westfälische Wilhelms-Universität Münster, Germany, at the Institute of Computer Science. He is also an Adjunct Professor of Computational Science and Computer Science at the Department of Computer Science and Electrical Engineering of the Jacobs University, Bremen, Germany. He received his academic degrees from the Universität Karlsruhe (TH), Germany, including a diploma (M.Sc.) in Computer Science in 1997 and a PhD in Computer Science in 2001. He spent three years as a post-doctoral researcher and lecturer at the Institute for Data Analysis and Visualization (IDAV) and the Department of Computer Science of the University of California, Davis, U.S.A. He joined the Department of Mathematics and Computer Science of the Ernst-Moritz-Arndt-Universität Greifswald, Germany, as an assistant professor in 2004. In 2006, he joined Jacobs University as an associate professor and became a full professor in 2012. In 2017, he moved to his current affiliation, the Westfälische Wilhelms-Universität Münster, Germany. His research interests are mainly in the areas of data visualization or interactive visual data analysis and include certain topics in computer graphics and geometric modeling.
Mathematical models are used for the description and understanding of phenomena in all sciences. Numerical simulations support the validation of the models and serve data assimilation purposes. For computer animations, spatio-temporal simulations are used to derive the appearance of natural phenomena. These simulations often depend on a number of simulation parameters and initial configurations. The proper selection of these parameters and configurations is often not known exactly, or their impact is itself part of the underlying research task. Therefore, multiple simulation runs with varying parameter settings, or ensembles of simulations with varying configurations, are executed. The analysis of such simulation ensembles is complex, especially when each simulation run represents a four-dimensional spatio-temporal phenomenon. The amount of data in a simulation ensemble often adds up to hundreds of gigabytes or even terabytes. The analysis of such complex data is no longer possible without the use of computers. On the other hand, such an analysis typically requires the expertise of a human: for animations, the designer would need to find the simulation run with the desired appearance. As visual representations are intuitive and can be processed efficiently by humans, a suitable approach is to combine visual representations and interaction mechanisms with automatic analysis steps.
In this talk, I will present novel visualization methods that allow for an interactive comparative analysis of such large and complex data stemming from spatio-temporal simulation ensembles.
President, Select Services Films, Inc., USA
Keynote: The future of media
Susan Johnston, known as a Media Futurist, is President of Select Services Films, Inc., an award-winning production company that is also certified DBE and has a casting division, and is Founder/Director of the New Media Film Festival. As a kid, Susan was on the set of the first Great Gatsby, where she met Robert Redford while her father was handling the antique cars. From there, she worked on every production she could, garnering experience in every department of filmmaking. Her first film, a 35mm color film noir short, Room 32, won two awards, received distribution and was requested by Spiderman 3 for their production team. Susan founded the critically acclaimed New Media Film Festival® in 2009 to honor stories worth telling in the ever-changing landscape of media, New Media. Legendary judges cull the content for the annual festival in Los Angeles, which offers screening, competition ($45k in awards) and distribution opportunities. Currently there are over 600 titles in their library. Johnston has a background in the traditional film and TV industry, but has also become known in recent years as a pioneering new media producer, including Stan Lee’s Comikaze Expo panel for Independent Creators, co-producing the feature film Dreams Awake, and currently producing the Marvel Comic feature Prey: Origin of the Species. While the industry was changing from standard def to HD, Johnston produced the 1st series for mobile, Mini-Bikers, and the 1st live-stream talk show on HD with a Panasonic Varicam; she tested the Panasonic DVX100, which led to some changes on the DVX100A, and was on a committee to develop the SAG Internet contract with Pierre Debs of SAG. She is currently a Professor Emeritus in New Media, on the New Media steering committee for The Caucus, an advisory board member for the SET Awards (Entertainment Industry Council), a Board Member of the Computer & Animation Society and a Miss America NY judge. In 2012 LinkedIn announced Susan Johnston was one of the top 10% profiles looked at out of 20MM.
With over 80,000 on the monthly newsletter elist and over 2 million across social media, Susan Johnston has been touted as a Social Media expert and has lent her expertise to Los Angeles Social Media Week, IFFS, the Jackson Hole Science Conference, Moviola, and a Brasov, Romania conference, and is proud to have spoken at such high-level conferences as the American Film Market, NAB & NATPE about new advancements in the social media/crowd-funding space. In November 2016, Susan will keynote the 3rd Annual Computer & Animation Expo in Vegas. She is the winner of Best Women-Owned Film & TV Production Company, CA, 2016. Prior to relocating to Los Angeles in 2000, Susan Johnston, a New England native, worked with the Providence & Rhode Island Film Commissions over 5 years to build the infrastructure used by the Farrelly brothers, as well as by director Michael Corrente, NBC's hit TV series Providence, and the New England Screenwriters Conference. She developed Context Media Studios' international production capabilities, garnered funds before Senate Committee hearings and helped facilitate the 25% tax incentive for investors of films in Rhode Island.
Talk on: A case study will be shared about two different animation projects and their journey through option, development, packaging, producing, editing, and distribution. In addition, current animation trends will be covered, from Independent to Studio Level: hand-drawn, digital, 3D, 4K, 5D and all things in between and advancing.
Senior Lecturer, Aberystwyth University, UK
Yonghuai Liu is a Senior Lecturer at Aberystwyth University. He completed his PhDs (1993-1997 and 1997-2000) at Northwestern Polytechnical University, P. R. China, and The University of Hull, UK, respectively. In 1997, during his PhD, he received an Overseas Research Students (ORS) award. He has also been an editorial board member of the American Journal of Educational Research, published by Science and Education, an open-access academic publisher, since 2015, and is an associate editor of several journals. His research interests include computer graphics, pattern recognition, visualization, robotics & automation, and 3D imaging, analysis and its applications.
3D data can be easily captured nowadays using the latest laser scanners such as the Microsoft Kinect. Since the scanners have a limited field of view and one part of an object may occlude another, the captured data can only cover part of the object of interest and is usually described in the local scanner-centred coordinate system. This means that multiple datasets have to be captured from different viewpoints. In order to fuse the information in these datasets, they have to be registered into the same coordinate system for such applications as object modelling and animation. The purpose of scan registration is to estimate an underlying transformation so that one scan can be brought into the best possible alignment with another. To this end, various techniques have been proposed, among which feature extraction and matching (FEM) is promising due to its wide applicability to different datasets subject to different sizes of overlap, geometry, transformation, imaging noise, and clutter. In this case, the established point matches usually include a large proportion of false ones.
This talk will focus on how to estimate the reliability of such point matches, from which the best possible underlying transformation will be estimated. To this end, I will first show some example 3D data captured by different scanners, from which some of the issues that make the registration of multiple scans challenging can be identified. Then I will review the main techniques in the literature. Inspired by AdaBoost learning techniques, various novel algorithms will be proposed, discussed and reviewed. These techniques are mainly based on the real and gentle AdaBoost respectively and include several steps: weight initialization, underlying transformation estimation in the weighted least squares sense, estimation of the average and variance of the errors of all the point matches, error normalization, and weight update and learning. These steps are iterated until either the average error is small enough or the maximum number of iterations has been reached. Finally, the underlying transformation is re-estimated in the weighted least squares sense using the estimated weights.
Third, I will validate the proposed algorithms using various datasets captured with the Minolta Vivid 700, Technical Arts 100X, and Microsoft Kinect, and show the experimental results. To demonstrate the robustness of the proposed techniques, different FEM methods will also be considered for the establishment of the potential point matches: signature of histograms of orientations (SHOT) and unique shape context (USC), for example. Finally, I will conclude the talk and indicate some future work.
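The iterative weighting scheme outlined above can be sketched roughly as follows. This is a minimal illustration, not the speaker's implementation: the exponential weight update is an assumption standing in for the real/gentle AdaBoost variants mentioned, and the rigid-transform step uses the standard weighted Kabsch solution.

```python
import numpy as np

def weighted_rigid_transform(P, Q, w):
    # Weighted Kabsch: find R, t minimising sum_i w_i * ||R p_i + t - q_i||^2
    w = w / w.sum()
    cp = (w[:, None] * P).sum(axis=0)            # weighted centroids
    cq = (w[:, None] * Q).sum(axis=0)
    H = (P - cp).T @ ((Q - cq) * w[:, None])     # weighted covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                           # proper rotation (det = +1)
    t = cq - R @ cp
    return R, t

def boost_registration(P, Q, iters=50, tol=1e-6):
    """Iteratively re-weight point matches (P[i], Q[i]) so that
    likely-false matches receive small weights (AdaBoost-inspired)."""
    n = len(P)
    w = np.full(n, 1.0 / n)                          # 1. weight initialisation
    for _ in range(iters):
        R, t = weighted_rigid_transform(P, Q, w)     # 2. weighted LS estimate
        e = np.linalg.norm(P @ R.T + t - Q, axis=1)  # residual per match
        mu, var = e.mean(), e.var() + 1e-12          # 3. error average/variance
        z = (e - mu) / np.sqrt(var)                  # 4. error normalisation
        w = w * np.exp(-z)                           # 5. weight update
        w /= w.sum()
        if mu < tol:                                 # stop on small avg error
            break
    return weighted_rigid_transform(P, Q, w)         # final re-estimation
```

With a fraction of grossly wrong matches, the normalised errors of the false matches stay above average, so their weights decay geometrically and the final estimate is driven by the consistent matches.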
- Computer Graphics
Chair and Professor, University of Massachusetts Lowell, USA
Haim Levkowitz is the Chair of the Computer Science Department at the University of Massachusetts Lowell, in Lowell, MA, USA, where he has been a faculty member since 1989. He is a two-time recipient of a US Fulbright Scholar Award to Brazil (August-December 2012 and August 2004-January 2005). He was a Visiting Professor at ICMC (Instituto de Ciencias Matematicas e de Computacao, The Institute of Mathematics and Computer Sciences) at the University of Sao Paulo, Sao Carlos, SP, Brazil (August 2004-August 2005; August 2012-August 2013). He co-founded and was Co-Director of the Institute for Visualization and Perception Research (through 2012), and is now Director of the Human-Information Interaction Research Group. He is a world-renowned authority on visualization, perception, color, and their application in data mining and information retrieval. He is the author of “Color Theory and Modeling for Computer Graphics, Visualization, and Multimedia Applications” (Springer 1997) and co-editor of “Perceptual Issues in Visualization” (Springer 1995), as well as many papers on these subjects. He is also co-author/co-editor of "Writing Scientific Papers in English Successfully: Your Complete Roadmap" (E. Schuster, H. Levkowitz, and O.N. Oliveira Jr., eds., Paperback ISBN: 978-8588533974; Kindle ISBN: 8588533979, available now on Amazon.com: http://www.amazon.com/Writing-Scientific-Papers-English-Successfully/dp/8588533979). He has more than 44 years of experience teaching and lecturing, and has taught many tutorials and short courses, in addition to regular academic courses. In addition to his academic career, Professor Levkowitz has had an active entrepreneurial career as Founder or Co-Founder, Chief Technology Officer, Scientific and Strategic Advisor, Director, and venture investor at a number of high-tech startups.
Imagine you wake up one morning and -- just like it happens to so many other people every day everywhere -- something in your body is not working the way it worked just the night before: your lower back (very common), your knee, your elbow, or many other possible parts of your body ache and cannot function the way they previously did. After the typical doctor appointment, MRI scan, specialist consultation, and the like, you are prescribed Physical Therapy (PT). Only problem is that your work and life schedule, and your geographic location make it impossible for you to attend any PT center. Or, you are a soldier in some far away field, suffering something similar, with no possible PT specialist within hundreds or thousands of miles.
You are, however, luckier. The following day, a special delivery drops a package at your location. In it is a cuff, like the ones you have seen many athletes put on a tender knee or elbow. Except this one has an array of hundreds of sensors. You slip it on that aching knee, Bluetooth-connect it to your phone and start the sets of exercises prescribed to you. As you start your exercises, someone comes on the line, correcting your movements, guiding you to the right exercise routine. She is your remote Physical Therapist. She can be located halfway around the world, but your "smart" cuff gives her a live view of how you are doing your exercises. But not just that: it also gives her indicator measurements that tell her how your motion ability compares to that of a person who is similar to you in age, gender, body build, and many other measurements, but who does not suffer from any injury. Further, the camera that came in the package with the cuff is aimed at your face. It collects your facial expressions and analyses them to assess the level of pain you might experience as you go through your exercises. Based on all those indicators, your PT might guide you towards different exercises, or simply increase or reduce how strenuous those exercises should be, and much more.
No, this is not a science fiction movie. In this talk I will describe how, with the help of big data and the latest machine learning technologies, we are able to analyse your PT exercise data, and how computer graphics and visualization techniques provide your PT trainer with a live view of what you are actually performing and how you are performing it.
Nanyang Technological University, Singapore
Biju Dhanapalan is Associate Professor in the School of Art, Design and Media at Nanyang Technological University, Singapore. He is a leading visual effects director; he has designed and directed animation and VFX sequences for Indian, English, French and Hollywood productions for over a hundred feature films, ‘3 Idiots’, ‘PK,' and ‘Neerja’ to name a few. Besides features, he has lent his expertise across various verticals: art installations, new media, and commercials. His transdisciplinary training - Engineering and Industrial Design - has led him to design and develop custom devices and gears and various filming equipment including 3D stereoscopy rigs.
Kathakali, one of India’s eight classical dance forms, is a highly stylized and opulent dance-drama that originated nearly five hundred years ago in a southern state of India. Kathakali performers draw from a vast dictionary of highly advanced and sophisticated movements, a repertoire of gestures, and expressions. Motion capture was employed to encapsulate the temporal, three-dimensional data of a chosen Kathakali performance, in the motion capture laboratory at the School of Art, Design and Media, Nanyang Technological University, Singapore.
The analysis of the acquired motion capture data of Kathakali has revealed several possibilities. The numerical nature of the data gives mathematicians, scientists, and animators direct access to the complex and diverse kinetics of classical dance, lending a deeper understanding and meaningful abstractions of kinetic art. This research has opened possibilities for developing digital tools for classical dance pedagogy. An integrated archive of classical dance, pivoting on 3D motion capture with video and audio recording along with other pertinent data, can also be undertaken. The derivatives of the temporal data are being employed to drive the key parameters of an abstract animation film by the author himself.
By archiving a piece of a five-hundred-year-old tradition, the speaker has arguably tapped into the tangible and intangible heritage of an ancient civilization. This experimental dialogue between classical art and technology serves as a platform for a meaningful collaboration between ancient cultural heritage and rapidly advancing technology.
Colorado School of Mines, USA
William Hoff is currently with the DAQRI Austria Research Center in Vienna. Prior to that, he was an Associate Professor in Computer Science at the Colorado School of Mines. His research interests include computer vision and pattern recognition, with applications to augmented reality, robotics, and interactive systems.
Sports analysis is a useful application of technology, providing value to athletes, coaches, and sports fans by producing quantitative evaluation of performance. To address this field in the context of men’s gymnastics, a team at the Colorado School of Mines (Brian Reily, Hao Zhang, and William Hoff) has developed a system that utilizes a Microsoft Kinect 2 camera to automatically evaluate the performance of a gymnast on the pommel horse apparatus, specifically in regard to the consistency of the gymnast’s timing and body angle. The Kinect’s ability to determine the depth at each pixel provides information not available to typical sports analysis approaches based solely on RGB data. Our approach consists of a three-stage pipeline that automatically identifies a depth of interest, localizes the gymnast, detects when the gymnast is performing a certain routine, and finally provides an analysis of that routine. We demonstrate that each stage of the pipeline produces effective results: our depth-of-interest approach identifies the gymnast 97.8% of the time and removes over 60% of extraneous data; our activity recognition approach is highly efficient and identifies ‘spinning’ by the gymnast with 93.8% accuracy; and our performance analysis method evaluates the gymnast’s timing with accuracy limited only by the frame rate of the Kinect. Additionally, we validate our system and the proposed methods with a real-world online application, used by actual gymnastics coaches and viewed as a highly effective training tool.
Jingtian Li is an independent 3D Character Artist and Animator, and Assistant Professor of 3D Animation & Game Design (http://www.uiw3d.com) in the School of Media & Design at the University of the Incarnate Word in San Antonio. He has also worked in a variety of animation studios, such as Beijing Daysview Digital Image Co. and Passion Pictures NYC. He holds an MFA in Computer Animation from the School of Visual Arts in New York City, and a BFA in Digital Media from the China Central Academy of Fine Arts.
Character modelling is one of the most popular fields in the 3D animation and game industry. Most students consider becoming a character artist as their career, but few can really overcome all the difficulties of the character creation process and reach a truly professional level. Anatomy is one of the difficult skills we have to help students master. This presentation introduces a way of using big shapes and planes, instead of complex muscles and their names, to help students understand anatomy at the large scale and gradually move on to smaller shapes and planes. It explores ways of helping students learn to observe and understand shapes of any unknown kind and recreate them in 3D; to train their minds to simplify complex objects into easy and manageable primary shapes; to control detail rather than being overwhelmed by it; and, eventually, to train their eyes to quickly recognize the characteristics of any shape and easily recreate it in 3D, without the struggle of blindly trying to figure out what is wrong.
Adam Watkins is Professor and Coordinator of 3D Animation & Game Design in the School of Media & Design at the University of the Incarnate Word, USA. He has authored more than a dozen books and over one-hundred articles on 3D animation, modeling, and game design. He has been teaching at the university level for almost 20 years.
Anatomy is essential for any digital 3D modeler, character designer, animator, or rigger. Tackling human anatomy early in the education of these artists is critical to creating portfolio-worthy projects in their later classes. Unfortunately, these early classes are also tool-heavy, and students are wading through scores of new technologies and techniques. At UIW, we have tackled this “chicken and the egg” problem by creating three parallel courses - Character Modeling, Figure Drawing and Anatomy for Animators - that students take at the same time during their second semester. This separates the two areas - anatomy and technical proficiency - into separate bite-sized chunks. However, success heavily depends on these courses being tightly threaded together so the knowledge in each feeds into the others.
In this presentation, we will look at the justification, rationale, structure, and implementation of these three courses, the problems associated with separate courses, and how the courses can be effectively threaded together. Assignments will be shared along with examples of finished sculpts and models. Particular attention will be given to lessons learned in things that have not worked, tweaked methods, choices that have proven successful, and how we plan to move forward.
Sao Paulo State University, Brazil
Ines Aparecida Gasparotto Boaventura graduated in Mathematics from Sao Paulo State University, UNESP, Brazil, and received her master’s in Computer Science and Computational Mathematics and her PhD in Electrical Engineering from the University of Sao Paulo (USP). She has experience in Computer Science, focusing on Graphical Processing (Graphics), and works on the following subjects: Biometrics, Image Processing, and Computer Vision. She is a full-time professor and head of the Department of Computer Science and Statistics at UNESP, campus of Sao Jose do Rio Preto, Sao Paulo, Brazil. In 2011-2012 she was a visiting researcher at the PRIP Laboratory, CSE, Michigan State University.
Face recognition technology is a hot topic of research in the fields of image processing and computer vision. Facial features have very high reference value for identification, because they are easy to collect. Face recognition technology is widely applied in many systems related to information and public safety. This work presents a face recognition algorithm based on a new version of the Multi-Scale Local Mapped Pattern method.
The Local Binary Pattern (LBP) and its extended forms, such as the Mean Local Mapped Pattern (LMP) and the Multi-Scale Local Binary Pattern (MSLBP), were developed for the purpose of analyzing textures in images. Such methods compare histograms generated by micropatterns extracted from textures. A micropattern may be understood as a structure formed by pixels and their respective gray levels, capable of describing or representing the spatial context of some feature found in the image, such as borders, corners, texture, and even more complex and abstract patterns, like those found in a face image. In the MSLBP, a histogram is built at each scale from the values generated by image patterns smoothed by Gaussian filtering. The LMP technique consists of smoothing the image gray levels through a mapping made with a pre-defined function. For each image pixel, the mapping is made on the basis of a specific region of its neighbors.
In the face feature description problem, the LMP technique presented excellent results by considering the average of the locally mapped patterns, whereas the MSLBP, working at several scales, also reached higher performance compared with the original LBP. Thus, in this work we propose a new technique combining the LMP method and a new version of the MSLBP method, herein referred to as MSLMP (Multi-Scale Mean Local Mapped Pattern). The aim of this new approach is to extract micropatterns while attenuating the effects of noise often present in digital images.
Therefore, in this talk we will present some results of the method applied to face images from well-known face databases, such as ESSEX, JAFFE and ORL. The experiments carried out so far suggest that the presented technique provides detections with higher performance than the results presented in state-of-the-art research in the specialized scientific literature. For the mentioned databases, the results have reached 100% accuracy, using 7 scales of the proposed method.
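As background, the plain LBP descriptor that LMP, MSLBP and the proposed MSLMP build on can be sketched as follows. This is a minimal single-scale illustration, not the MSLMP method itself; the 3x3 neighbourhood and the bit ordering are illustrative choices:

```python
import numpy as np

def lbp_histogram(img):
    """Basic 8-neighbour LBP: threshold each 3x3 neighbourhood at its
    centre pixel, read the 8 comparison bits as a byte (the micropattern),
    and histogram the codes over the whole image."""
    # Neighbour offsets, clockwise from the top-left pixel.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    centre = img[1:-1, 1:-1]
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy : h - 1 + dy, 1 + dx : w - 1 + dx]
        codes |= (neigh >= centre).astype(np.uint8) << bit
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()        # normalised 256-bin texture descriptor
```

Two face images can then be compared by a histogram distance (e.g. chi-square); the LMP/MSLBP variants replace the hard threshold with a smoothed mapping and repeat the process at several scales.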
Tomsk State University of Architecture and Building, Russia
Professor Anna Yankovskaya obtained her DSc in Computer Science from Tomsk State University in Russia. She is currently head of the Intelligent Systems Laboratory and a professor at the following universities: Tomsk State University of Architecture and Building, National Research Tomsk State University, Tomsk State University of Control Systems and Radioelectronics, and National Research Tomsk Polytechnic University. She is an author of more than 650 publications and 7 monographs. Her scientific interests include mathematical foundations of test pattern recognition and the theory of digital devices; artificial intelligence, intelligent systems, learning and testing systems, blended education and learning; logical tests, mixed diagnostic tests, cognitive graphics; and advanced technology in education.
The idea of applying the n-simplex, and the theorem for decision-making and its justification in intelligent systems, were proposed by the author in 1990. The mathematical visualization of mapping the object under investigation into the n-simplex is given. The 2-simplex prism was first proposed by the author and Yamshanov for decision-making and its justification within intelligent dynamic diagnostic systems in 2015, and within intelligent dynamic predictive systems in 2016. A 2-simplex prism is a triangular prism with identical equilateral triangles (2-simplices) as its bases. The height of the 2-simplex prism in intelligent dynamic systems corresponds to the time interval of the dynamic process under consideration. The results of each of the diagnostic and predictive decisions are shown as points in 2-simplices disposed on cross-sections of the 2-simplex prism. The height of the 2-simplex prism is divided into a number of time intervals; this number corresponds to the number of diagnostic or predictive decisions. The distance between two adjacent 2-simplices is directly proportional to the time interval between them. For intelligent geoinformation systems, the height corresponds to the distance from the initial point to the final destination; in this case the distance between two adjacent 2-simplices corresponds to the distance between two points on a map. This paper presents the application of the 2-simplex prism cognitive graphic tool to a variety of problem areas in intelligent dynamic diagnostic and predictive systems: medicine, ecobiomedicine, ecology, geology, geoecology, emergency medicine and education. For the first time, the use of the 2-simplex prism is proposed for intelligent geoinformation systems.
The paper also presents the mathematical basics of intelligent systems construction and the results of decision-making and its justification in an intelligent system for organizational stress and depression diagnostics, and in intelligent learning and testing systems in the fields of discrete mathematics and power electronics.
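One plausible reading of the 2-simplex prism construction is barycentric: a decision over three alternatives is a point inside an equilateral triangle, and the prism adds time (or route distance) as a third axis. The sketch below illustrates that reading only; it is an assumption for illustration, not the authors' exact formulation:

```python
import numpy as np

# Vertices of an equilateral triangle: the 2-simplex.
V = np.array([[0.0, 0.0],
              [1.0, 0.0],
              [0.5, np.sqrt(3) / 2]])

def simplex_point(memberships):
    """Map three non-negative membership values (one per alternative)
    to a point inside the 2-simplex via normalised barycentric coords."""
    b = np.asarray(memberships, dtype=float)
    b = b / b.sum()
    return b @ V          # convex combination of the triangle vertices

def prism_point(memberships, t):
    """Place one decision on the 2-simplex prism: (x, y) from the
    simplex cross-section, z from the time (or distance) axis."""
    x, y = simplex_point(memberships)
    return np.array([x, y, t])
```

A sequence of decisions over time then becomes a polyline through the prism, which is what makes the dynamics of a diagnostic or predictive process visually inspectable.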
Lobachevsky Nizhny Novgorod State University, Russia
Petukhov Aleksandr Yurevich is the head of the laboratory "Modeling the Socio-Political Processes" at Lobachevsky Nizhny Novgorod State University. He is also head of several large research projects in the field of informational influence on the human mind (supported by grants from the President of the Russian Federation, the Russian Research Foundation, etc.).
The presentation will present the basic principles of the Information Image Theory and a mathematical model developed using it. The hierarchy of information images in an individual's mind, which determines his or her real and virtual activity, is considered. Algorithms describing the transfer and distortion of information images by individuals in the communication process are constructed. To corroborate the theory experimentally, the bilingual Stroop test is used. The results of the test are interpreted using the introduced theory and are then compared with the results of computer modeling based on the theory. It is shown that information images can be used not only to explain a number of cognitive processes of the human mind, but also to predict their dynamics in a number of particular cases.
Leonel Antonio Toledo Díaz received his Ph.D. from the Instituto Tecnológico de Estudios Superiores de Monterrey, Campus Estado de México, in 2014, where he was a full-time professor from 2012 to 2014. He was an assistant professor and researcher and has devoted most of his research work to crowd simulation and visualization optimization. He has worked at the Barcelona Supercomputing Center using general-purpose graphics processors for high-performance graphics. His thesis work was on levels of detail used to create varied animated crowds. Currently he is a researcher at the Barcelona Supercomputing Center.
The construction of cities and the simulation of agents form an expanding research area in computer graphics and artificial intelligence. Developing environments with intelligent agents poses several challenges; for instance, rendering thousands of objects with geometric and topological variety within any given scene is very complex and demands many computational resources, such as memory and processing power.
A broad range of areas and applications, such as games, movies, and urban simulation, require virtual 3D city models with detailed geometry, which poses several challenges. Cities are systems of high functional and visual complexity. To achieve this, it is necessary to implement level of detail techniques that reduce the system's workload.
The main contributions of this research are the following: a system that renders thousands of props to create urban environments incorporating crowd simulation. This system reduces the memory consumed in creating populated virtual environments, and memory requirements do not increase exponentially, no matter how many elements are rendered at any given time. Everything displayed in the scene is configurable using XML specification files.
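As a hypothetical illustration of such an XML-driven scene specification, the sketch below parses a small scene description into prop records that a renderer could instance many times instead of duplicating geometry. The schema (the `scene` and `prop` elements and the `mesh`, `count`, and `lod` attributes) is an assumption for illustration, not the actual format used by the system described in the talk.

```python
# Minimal sketch of loading an XML scene specification for an urban
# environment. The element and attribute names are illustrative
# assumptions, not the schema of the system described in the talk.
import xml.etree.ElementTree as ET

SCENE_XML = """
<scene name="downtown">
  <prop mesh="lamp_post" count="500" lod="low"/>
  <prop mesh="tree" count="1200" lod="medium"/>
  <prop mesh="building_block" count="80" lod="high"/>
</scene>
"""

def load_scene(xml_text):
    """Parse a scene description into a list of prop records.

    Each record stores the mesh identifier, the number of instances
    to render, and the requested level of detail, so a renderer can
    instance one mesh many times rather than storing each copy.
    """
    root = ET.fromstring(xml_text)
    props = [
        {
            "mesh": prop.get("mesh"),
            "count": int(prop.get("count")),
            "lod": prop.get("lod"),
        }
        for prop in root.findall("prop")
    ]
    return root.get("name"), props

name, props = load_scene(SCENE_XML)
total_instances = sum(p["count"] for p in props)
print(name, len(props), total_instances)  # downtown 3 1780
```

Because the file only lists mesh identifiers and instance counts, memory grows with the number of distinct prop types rather than with the number of instances drawn, which is the point of the XML-driven configuration described above.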
Applications for virtual city generation range from research and educational purposes, such as urban planning, to creating virtual environments for simulation, from which governments and civil engineers can benefit; applications can also be extended to traffic simulation or disaster route planning.
- Video Presentation
The Embassy of Peace, New Zealand
Joshua graduated as a Systems Engineer with emphasis in systems simulation, optimization and decision analysis in 1985. In Caracas, Venezuela, he was staff to the strategic planning Vice-President of Union Bank S.A.C.A. between 1986 and 1988. He became a consultant for the oil industry together with the firm Sercontec C.A. between 1988 and 1991. He was co-founder and Director of BDS (Banking Decision Services C.A.) between 1991 and 1994. Between 1986 and 1993, he taught systems simulation, decision analysis, time series analysis, systems dynamics, system concepts and mathematical modeling at the Universidad Metropolitana (Metropolitan University), where he supervised seven (7) thesis projects, most of them simulation models and decision support systems. In 2009 he completed a Master's thesis in Cognitive Neuroscience, entitled "The Brain of Melchizedek," at Otago University in New Zealand. Since 2011, he has travelled to different nations in his capacity as Ambassador of Peace, delivering seminars, TV interviews, radio talks and conferences to large audiences at universities, medical clubs and hospitals on the integration between Scientific Knowledge and Spiritual Wisdom. He has also been engaged in research in systems cognitive neuroscience since 2012, co-authoring several publications including work in brain dynamics, applied mathematics, systems modeling and philosophy concerning the understanding of human consciousness, the creation of knowledge, and meaning- and values-based decision making. In 2015 Joshua led the research group at The Embassy of Peace in Whitianga, New Zealand for the International Synchronization Heart Rate Variability (HRV) Study conducted by the HeartMath Institute. Recently, he has also authored and co-authored several publications both in the Journal of Consciousness Exploration & Research and in the Scientific GOD Journal.
Currently, he is preparing to complete a PhD dissertation on matters related to human consciousness and the biophysics of brain dynamics.
This presentation is inspired by the work of Walter J. Freeman on brain field dynamics and its implications for the understanding of cognitive functions, intentional action and decision-making. The main purpose is to present a novel way of applying the art of encephalography. We have moved from the mere plotting of brain signals in the time domain to spatio-temporal frames that produce a brain dynamics movie, with the power to reveal different visual patterns of behavior under various conditions, based on experimental data produced by different stimuli. The methodology of brain movie making is briefly described to explain how large quantities of brain data images are processed to produce the movies, which are then displayed in order to visually discriminate between different cognitive states, as well as the different stages of cognitive processes related to the cycle of creation of knowledge and meaning so vital for decision-making. It is proposed that careful observation of each of these movies will facilitate a learning process, in order to: (a) identify different structures and visual patterns where large-scale synchronizations and desynchronizations are observed, together with the temporal evolution of the different stages of the hypothesized cycle of creation of knowledge and meaning; and (b) facilitate the study of brain dynamics across different frequency bands with the aid of different indices, such as the Pragmatic Information index, which is based on the instantaneous phase and the analytic amplitude. To summarize, the art of encephalography enhanced by brain dynamics movies allows us to identify brain patterns and events associated with different measurements across bands and the different stages of the cycle of creation of knowledge and meaning.
This work was accomplished by the research team at The Embassy of Peace in Whitianga, New Zealand, in close collaboration with Walter J. Freeman and Robert Kozma.