Day 2:
- Computer Graphics Applications
Head of R&D, CESNET, Czech Republic
Jiri Navrátil received his PhD in Computer Science from the Czech Technical University in Prague in 1984. He worked for 30 years at the Computing and Information Center of CTU in various positions connected with High Performance Computing and Communications. During several sabbatical leaves he worked in Switzerland, Japan and the USA in the field of networking. In 2006 he began working for CESNET, the Czech Education and Scientific Network, as leader of a group supporting special research applications that use the high-speed Internet. In recent years he has participated in several multimedia performances organized within large international collaborations in different fields.
CESNET is a research organization focused on networking and Internet applications, including video processing. CESNET also plays the role of the National Research and Educational Network (NREN) in the Czech Republic, providing e-infrastructure (a high-speed network, computing services and data storage facilities) to academic users in the country. The CESNET network is part of the pan-European network GÉANT, which connects all academic networks in Europe and provides many links to Asia, Africa, South America and the US. This creates an ideal environment for collaboration in many directions of science, medicine and culture.
Over the years CESNET has developed two technologies that allow transmission of HD and UHD video over a network: UltraGrid, a software-based solution, and MVTP, a hardware-accelerated, FPGA-based solution. Both are widely used as technological tools in events that need high-quality, low-latency video. Over the last several years, together with several partners (the Music and Dance Faculty of the Academy of Performing Arts in Prague, Konic Theatre Barcelona, the New World Symphony Miami, KISTI Korea, the Royal Danish Academy of Music (RDAM) and the APAN Cultural Working Group), we have organized several Cyber Performances (CPs): joint events in which artists from many countries participated. The main goal of such CPs is to demonstrate the capability of the current Internet to support live collaboration and online interaction of performing artists (musicians, dancers, animators) across countries and continents using modern multimedia tools.
These CPs are neither simple nor cheap: they require long-term planning and preparation and, finally, close collaboration of many people from different fields (artists, technicians, networkers). This is why they are usually organised only within globally significant IT events such as Supercomputing, APAN or Internet2 meetings, TNC conferences, CineGrid workshops, GLIF meetings, etc. Past successes include the CP “Dancing Beyond Time” at the 36th APAN meeting in Daejeon, South Korea, the CP “Dancing in Space” at the 37th APAN meeting in Taiwan, and “Walking in historical Prague” at the Internet2 meeting in Honolulu, USA. The Network Performing Arts Production Workshop (NPAPW) is an event connecting the creators, artists and technicians working in this field from around the world to present their projects and discuss ways to proceed in this area. CESNET has participated in several previous NPAPW workshops, with the distributed concert “Piano and Violin” in London in 2015 and “Organs and Trumpet” in Miami in 2016. Our current colleagues Ian Biscoe and Jana Bitter presented the outdoor CP “Bridge to Everyone” at NPAPW 2016.
At this conference, CGA2017, we will describe our experiences from the latest CP, prepared for NPAPW 2017 in Copenhagen and called “Similarities”. The story of the performance is the following: the performers are guides on a journey through their locations. Dancers guide the audience through their location via movement that directly interacts with and interprets various features of their respective location (shapes, forms, colours, structures of the place). Musicians provide a unifying soundtrack for the dancers and, ideally, also react musically to the dancers' movements. The guided journeys (local performances) are captured on video, and the eye of the camera provides the progression of the resulting performance: a film made in real time for Copenhagen. As the eye of the camera is selective, it reveals each location to the audience only gradually. The journey goes from the micro world of details and very close-up video through to the full image of each location. In the beginning, the detailed shapes and forms of each location seem very similar, without being specific to one location; then, as the camera zooms out during the course of the performance, the viewer begins to recognize more and more the specificity of each location. Performers communicate with the other locations and performers (whose video they can see) by searching for similarities: similar shapes, structures or forms.
Teams from the Czech Republic (CZ), the United States (US), Spain (ES) and Denmark (DK) jointly participated in this event. The team included network engineers and researchers, audio-visual technicians, programmers, musicians, dancers, scene designers and choreographers, with some people spanning multiple areas. The event began simultaneously in Prague (CZ), Barcelona (ES), Miami (US) and Copenhagen (DK). The music performance was captured by a 4K camera and delivered from the National Technical Library (NTK) in Prague to Barcelona.
Founder, NWP Technology, UK
Nick Palfrey is Founder of NWP Technology, United Kingdom, and CEO of Real Visual Group, Plymouth, UK. He has research expertise in Virtual Reality and Interactive Media and works as a company director and consultant. An entrepreneurial, award-winning and highly influential technologist, he is a visionary leader and board-level director with a history of delivering pioneering projects across a broad spectrum of sectors such as defence, pharmaceuticals, construction and manufacturing. Real Visual Group was launched in 2011 with a vision of applying gaming technology to new markets; it delivers cutting-edge, 3D, non-gaming applications on every platform for a broad spectrum of industries. Real Visual develops real-time 3D simulations to create realistic training technology, or 'serious games', which can be published on a wide range of platforms. He has worked all over the world, providing solutions, increasing value and growing relationships. With experience in Australia, Asia, North America and Europe, he helps buyers become more intelligent with their requirements and supports businesses in their sales and marketing ambitions.
Helwan University, Egypt
Mahmoud Abd Ellatif is an Associate Professor in the Faculty of Computers and Information, Helwan University, Egypt.
Current e-learning systems face several challenges, and the research community regards the e-learning ecosystem as the next generation of e-learning. An e-learning ecosystem has many advantages: content is designed for interaction, and learners create groups, interact, and collaborate with each other and with educators.
The e-learning ecosystem has challenges of its own: it needs to adapt the learning environment to learners' varying needs and preferences. Today it still uses the teacher-student model, in which a single fixed learning pathway is applied to all learners.
The e-learning ecosystem needs to incorporate personalization by adopting new technologies. Using Semantic Web ontologies and the Semantic Web Rule Language (SWRL) to personalize the learning environment plays a leading role in building a smart e-learning ecosystem and enriching the learning environment. The main points of my talk include:
- E-learning ecosystem layers
- The semantic relations between learning-style categories, learning objects, learning activities and teaching methods
- A semantic decision table to select the suitable learning styles to match learning objects for each learner
- Semantic Web ontologies and the Semantic Web Rule Language for personalizing the learning environment
Federal University of Bahia, Brazil
Alan Soares is a researcher at the CGLab of the Federal University of Bahia, Brazil, and runs a company that provides software development services. He is a master's student in computer science focusing on gesture recognition, a path he began with undergraduate research. Besides research, he has stood out in the technology industry for his knowledge and experience in software development. During his undergraduate studies he worked with simulated bipedal robots and, as a result, won important titles in competitions such as LARC and the world robotics competition held by the RoboCup Federation.
The recognition of dynamic hand gestures from pure geometric 3D data in real time is a challenge. RGB-D sensors have simplified this task, giving an easy way to acquire 3D points and track them using depth-map information. But using this collection of raw 3D points as the gesture representation in a classification process is prone to mismatches, since gestures performed by different people can vary in scale, location and velocity. In this paper we analyze how different simplification and regularization techniques can provide more accurate representations of the gestures. Using Dynamic Time Warping (DTW) as the classification method, we show that the simplification and regularization steps can improve the recognition rate and also reduce the time needed for gesture recognition.
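As a minimal sketch of the similarity measure named above, the following computes the classic DTW distance between two 1-D sequences. A real gesture recognizer would compare sequences of 3D hand positions and include the simplification/regularization steps, which are not shown here; the function name and toy sequences are illustrative only.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic Time Warping distance between two 1-D sequences
    (e.g. one coordinate of a hand trajectory over time)."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])          # local distance
            cost[i, j] = d + min(cost[i - 1, j],      # stretch a
                                 cost[i, j - 1],      # stretch b
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

# Two gestures with the same shape but different speed align closely:
slow = [0, 0, 1, 1, 2, 2, 3, 3]
fast = [0, 1, 2, 3]
print(dtw_distance(slow, fast))  # -> 0.0, despite the length difference
```

This elasticity with respect to velocity is exactly why DTW suits gesture classification: a slowly performed gesture still aligns with a fast template of the same shape.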
Jin Wang received a Bachelor's degree in Software Engineering from Beijing University of Chemical Technology, Beijing, China, in 2012. She won the National Scholarship in 2010 and the National Endeavor Fellowship in 2009. She received a Master's degree in Computer Application Technology from Shijiazhuang Tiedao University in 2015. She has published many papers indexed by ISTP, EI and SCI and has participated in a National Natural Science Fund project. Since 2015 she has been pursuing her PhD in the School of Software Engineering, Department of Information, Beijing University of Technology. Her research interests are the Internet of Things, software engineering, embedded systems, and image and video quality assessment.
This talk concerns the definition of the Internet of Things (IoT) research problem. Research on the IoT proceeds by studying first the objects, then the links, then the network: the objects are the "things" in the Internet of Things, the link is how objects connect to the network, and the network is what this network actually is. Defining the objective function is the key problem; one can start with simple, critical questions, and an algorithm is then the sequence of steps that solves the problem. What is the Internet of Things? Objects connected to the Internet form the Internet of Things, for example networked cups or networked cars. To argue that the IoT is superior to other networks, one must ask what objects it is composed of, what its structure and properties are, and where its innovation and advantage lie. Four key technologies of the IoT are widely used: RFID, WSN, M2M, and their integration. RFID systems can be prototyped using MATLAB, NS2 or Android; WSNs can be simulated with NS2 or OMNeT++; and M2M applications can be developed in Java. This paper therefore focuses on the advantages of the Internet of Things over the Internet. The Internet of Things has no unified definition: some believe that interconnected RFID constitutes the IoT, some think a sensor network is the IoT, some think M2M (machine-to-machine) communication is the IoT, and some think that stretching and extending the Internet to any goods is the IoT. The Internet of Things not only meets the demand for networked information about goods but is also pushed by current technological development. Finally, and most importantly, the Internet of Things can boost the economy, so research on the Internet of Things is very important.
Demetra Englezou is a Lecturer at the European University of Cyprus. She received a Master's degree in Computer Animation from Bournemouth University, UK, in 2001, and a BA in Graphic Design from the University of the West of England, Bristol, UK, in 2000. She has produced 3D animation projects for television adverts for major worldwide companies. In 2004 she received the Pancyprian Award for the logo design for the Office of the Cyprus Telecommunications Regulator (OCECPR). She is a member of the international organization Art Tech Media, and has been an associate member of the international organization SMPTE (Society of Motion Picture and Television Engineers) since 2009.
Motion Vibes is an educational project that teaches motion design exclusively to deaf people with an artistic inclination. Motion Vibes was born of the need to teach motion design effectively to hearing-impaired students.
The objective of the Motion Vibes project has been to draw upon the pedagogical advancements of teaching music to the deaf and to reverse the method: from creating music through visual components to creating motion graphics inspired by music. The project began with volunteer students from the European University of Cyprus. Some of the students were studying for a Graphic Design degree (BA), while others studied Computer Science or Education courses. All the participating students were deaf or hard-of-hearing. Each class session lasted around two and a half hours, the first forty minutes of which were dedicated to theory and to explaining the various procedures we would apply. A sign-language interpreter translated all the information the students needed to understand the main objective of the project. The rest of the class time was taken up by practical demonstrations and applications of the project. This study is just the beginning of an investigation that will provide many solutions to current creative problems and help us develop new pedagogical methods of teaching motion graphics to deaf students. Despite all the obstacles in teaching motion design to deaf students, the results were very satisfactory. The whole process was a great experience for both the students and the lecturers. The students expressed their enthusiasm for the course and felt that they had discovered a new path of expression, that of moving creation. They also said that by the end of the course they had a better understanding of the relation between sound (through vibrations) and moving images. The world of the deaf is a world of incredible depth and surprises. The main finding of this project has been that motion design is a course that does indeed need special modifications in order to accommodate the learning needs of deaf and hard-of-hearing students, but not one that lies beyond their reach.
The possibilities are tremendous, and with the rapid evolution of technological tools, new opportunities and tools for exploring motion graphics through visual and tactile aids emerge every day. The next task in this project will be to create a narrative story that combines moving images and sound by applying these same techniques. By the end of the workshop the students were able to put together a motion design film with various transformations and shapes following the musical pattern. The general concept was based on creating action and reaction according to the vibrations of the music. Interestingly, the animations created by the students not only expressed the emotions we communicated to them via the various images, but also reflected their own emotional state. In the end, the animated paintings were a combination of the feelings of their own inner world and of the influence of sound, and the different colors used in the film also bear evidence of this automatic mechanism.
Amir Hossein Niknamfar received his B.S. and M.S. degrees, both in Industrial Engineering, from the Islamic Azad University, Qazvin Branch, Iran, in 2009 and 2013, respectively. He is currently a member of the American Institute of Industrial and Systems Engineers, USA, and a referee for eight journals.
In real applications of hub networks, travel times may vary due to traffic, climate conditions, and land or road type. To handle this difficulty, in this paper the travel times are assumed to be characterized by trapezoidal fuzzy variables, yielding a fuzzy capacitated single allocation p-hub center transportation problem (FCSApHCP) with uncertain information. The proposed FCSApHCP is redefined into its equivalent parametric integer nonlinear programming problem using credibility constraints. The aim is to determine the location of p capacitated hubs and the allocation of center nodes to them in order to minimize the maximum travel time in a hub-and-center network under an uncertain environment. As the FCSApHCP is NP-hard, a novel approach called the memories-based genetic algorithm (MGA) is developed to solve it. This algorithm utilizes two knowledge modules to gain knowledge about good and bad hub locations and saves it in a good-hub and a bad-hub memory, respectively. As there is no benchmark available to validate the results obtained, a genetic algorithm with multi-parent crossover is designed to solve the problem as well. The algorithms are then tuned, and their performances are analyzed and compared statistically. Finally, the applicability of the proposed approach and the solution methodologies are demonstrated. Sensitivity analyses on the discount factor in the network and on the memory sizes of the proposed MGA are conducted at the end to provide further insights.
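For readers unfamiliar with the metaheuristic family named above, the following is a generic genetic-algorithm loop over bit-string solutions (e.g. which nodes are opened as hubs). It is only a hedged sketch: it is not the authors' MGA (the good-hub/bad-hub memories and multi-parent crossover are omitted), and the toy objective is invented for illustration.

```python
import random

def genetic_algorithm(fitness, n_bits, pop_size=30, generations=100,
                      crossover_rate=0.9, mutation_rate=0.02, seed=0):
    """Generic GA minimizing `fitness` over bit-strings of length n_bits."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = min(pop, key=fitness)                   # elitism: keep best ever
    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            # binary tournament selection of two parents
            p1 = min(rng.sample(pop, 2), key=fitness)
            p2 = min(rng.sample(pop, 2), key=fitness)
            c1, c2 = p1[:], p2[:]
            if rng.random() < crossover_rate:      # one-point crossover
                cut = rng.randrange(1, n_bits)
                c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            for c in (c1, c2):                     # bit-flip mutation
                for i in range(n_bits):
                    if rng.random() < mutation_rate:
                        c[i] = 1 - c[i]
                nxt.append(c)
        pop = nxt[:pop_size]
        best = min(pop + [best], key=fitness)
    return best

# Toy objective (hypothetical): open exactly p = 3 "hubs" out of 10 nodes.
best = genetic_algorithm(lambda x: abs(sum(x) - 3), n_bits=10)
print(sum(best))  # -> a solution with (close to) 3 open hubs
```

The MGA described in the abstract would additionally bias crossover and mutation using its good-hub and bad-hub memories; that refinement is what the paper evaluates.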
- Poster Presentation
Sebha University, Libya
Ali Ukasha is an Associate Professor (Animation/Illustration) in the Department of Electrical and Electronics Engineering at Sebha University, Libya. He has published more than 33 papers in international conferences and journals, and published a book on image processing in 2016.
Knowing the boundaries of objects in an image is of prime importance to researchers: with clear contours, a doctor can easily diagnose the patient's condition. This is possible, but the challenge is whether we can do it for a medical image after it has been encrypted. The encryption algorithm used here is the RSA (Rivest-Shamir-Adleman) algorithm, which uses two-key encryption, one of the keys being secret. In this work we introduce a new idea for extracting contours from the encrypted image after converting it to the spectral domain using the Lifting Wavelet, Walsh, and Periodic Haar Piecewise-Linear (PHL) Transforms. In the spectrum image, compression is done using the zonal sampling method. To increase security, the Arnold transform is applied to the encrypted image using private keys. Contour extraction from the reconstructed medical image can then be performed using the Canny edge detector. The comparison between these spectral algorithms is performed in terms of mean square error, peak signal-to-noise ratio, compression ratio, and the number of contour points detected by the edge detector. The experimental results show that with this algorithm the contour points can be easily detected from the transmitted encrypted medical image, and that the results are best with the DCT transform. The compression ratio using the PHL transform exceeds 88.5391%, with retained energy reaching 84.125%.
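The Arnold transform mentioned above is the standard cat-map scrambling of a square image; the following is a minimal sketch in which the iteration count plays the role of a private key. This illustrates only the scrambling step, not the authors' full pipeline (RSA, the spectral transforms, zonal sampling and Canny detection are omitted).

```python
import numpy as np

def arnold(img, iterations=1):
    """Arnold cat-map scrambling of a square N x N image:
    the pixel at (x, y) moves to ((x + y) mod N, (x + 2y) mod N)."""
    n = img.shape[0]
    assert img.shape[0] == img.shape[1], "image must be square"
    out = img
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scrambled
    return out

def arnold_inverse(img, iterations=1):
    """Undo the scrambling: only a holder of the iteration count
    (the 'private key') recovers the original pixel arrangement."""
    n = img.shape[0]
    out = img
    for _ in range(iterations):
        unscrambled = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                unscrambled[x, y] = out[(x + y) % n, (x + 2 * y) % n]
        out = unscrambled
    return out
```

Because the map is a bijection on the pixel grid, applying `arnold_inverse` with the same iteration count exactly restores the image, which is why it can sit between encryption and transmission without losing information.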