Scientific Program

Conference Series Ltd invites participants from across the globe to attend the 4th International Conference and Expo on Computer Graphics & Animation in Berlin, Germany.

Submit your Abstract
or e-mail to

[email protected]
[email protected]
[email protected]

Day 2:

Keynote Forum

Yonghuai Liu

Senior Lecturer, Aberystwyth University, UK

Keynote: 3D Shape Matching for Object Modelling
Computer Graphics 2017 International Conference Keynote Speaker Yonghuai Liu photo
Biography:

Yonghuai Liu is a Senior Lecturer at Aberystwyth University. He completed his PhDs (1993-1997 and 1997-2000) at Northwestern Polytechnical University, P. R. China and The University of Hull, UK, respectively. In 1997, during his PhD, he received an Overseas Research Students (ORS) award. He has been an editorial board member of the American Journal of Educational Research, published by Science and Education, an open-access academic publisher, since 2015, and is an associate editor for several journals. His research interests include computer graphics, pattern recognition, visualization, robotics and automation, and 3D imaging, analysis and their applications.

Abstract:

3D data can be easily captured nowadays using the latest scanners such as the Microsoft Kinect. Since scanners have a limited field of view and one part of an object may occlude another, the captured data can only cover part of the object of interest and is usually described in the local scanner-centred coordinate system. This means that multiple datasets have to be captured from different viewpoints. In order to fuse the information in these datasets, they have to be registered into the same coordinate system for applications such as object modelling and animation. The purpose of scan registration is to estimate an underlying transformation so that one scan can be brought into the best possible alignment with another. To this end, various techniques have been proposed, among which feature extraction and matching (FEM) is promising due to its wide applicability to datasets subject to different sizes of overlap, geometry, transformation, imaging noise, and clutter. In this case, the established point matches usually include a large proportion of false ones.

This talk will focus on how to estimate the reliability of such point matches, from which the best possible underlying transformation will be estimated. To this end, I will first show some example 3D data captured by different scanners, from which issues can be identified that make the registration of multiple scans challenging. Then I will review the main techniques in the literature. Inspired by AdaBoost learning techniques, various novel algorithms will be proposed, discussed and reviewed. These techniques are mainly based on real and gentle AdaBoost respectively and include several steps: weight initialization, underlying transformation estimation in the weighted least squares sense, estimation of the average and variance of the errors of all the point matches, error normalization, and weight update and learning. These steps are iterated until either the average error is small enough or the maximum number of iterations has been reached. Finally, the underlying transformation is re-estimated in the weighted least squares sense using the estimated weights.
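The iterative reweighting steps above can be sketched as follows: a weighted least-squares rigid fit (weighted Kabsch/Procrustes) whose weights are re-derived from normalized residual errors each round so that unreliable matches lose influence. This is a minimal illustration in the spirit of the talk, not the authors' actual algorithm; the exponential down-weighting rule is an assumption.

```python
import numpy as np

def weighted_rigid_transform(P, Q, w):
    """Estimate rotation R and translation t minimizing the weighted
    least-squares error sum_i w_i * ||R @ P[i] + t - Q[i]||^2."""
    w = w / w.sum()
    mu_p, mu_q = w @ P, w @ Q                 # weighted centroids
    Pc, Qc = P - mu_p, Q - mu_q
    H = (Pc * w[:, None]).T @ Qc              # weighted cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_q - R @ mu_p
    return R, t

def robust_register(P, Q, iters=20, tol=1e-8):
    """Iteratively re-weight point matches: matches with large residuals
    are down-weighted, echoing the AdaBoost-style scheme in the talk."""
    n = len(P)
    w = np.full(n, 1.0 / n)                   # weight initialization
    for _ in range(iters):
        R, t = weighted_rigid_transform(P, Q, w)
        r = np.linalg.norm((P @ R.T + t) - Q, axis=1)  # per-match errors
        mu, sigma = r.mean(), r.std() + 1e-12
        e = (r - mu) / sigma                  # error normalization
        w = np.exp(-e)                        # down-weight large errors
        w /= w.sum()                          # weight update
        if mu < tol:
            break
    return weighted_rigid_transform(P, Q, w)  # final re-estimation
```

With a few grossly wrong matches among mostly correct ones, the reweighting quickly drives the outliers' influence toward zero and recovers the underlying rigid transformation.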

Third, I will validate the proposed algorithms using various datasets captured with the Minolta Vivid 700, Technical Arts 100X, and Microsoft Kinect, and show the experimental results. To show the robustness of the proposed techniques, different FEM methods will also be considered for the establishment of the potential point matches: signature of histograms of orientations (SHOT) and unique shape context (USC), for example. Finally, I will conclude the talk and indicate some future work.

Keynote Forum

J. Joshua Thomas

Senior Lecturer, KDU Penang University College, Malaysia

Keynote: Visual analytics solution for scheduling processing phases
Biography:

J. Joshua Thomas has been a senior lecturer at KDU Penang University College, Malaysia since 2008. He started in April 2002 as a lecturer at the same university college. He obtained his PhD (Intelligent Systems) in 2015 from Universiti Sains Malaysia, Penang, and his Master's degree in 1999 from Madurai Kamaraj University, India. From July to September 2005, he worked as a research assistant at the Artificial Intelligence Lab at Universiti Sains Malaysia, and from March 2008 to March 2010 as a research associate at the same university. He is currently an associate editor for the international journal Intelligent Information Processing, an editorial board member for the Journal of Energy Optimization and Engineering (IJEOE), and an invited guest editor for the Journal of Visual Languages and Communication (JVLC-Elsevier), the Journal of Healthcare Engineering (JHE), and Information Visualization (SAGE). He is serving as a Special Session Chair (Optimization in Smart Data and Visualization) at the international conference COMPSE 2017, Thailand, and as a workshop presenter at IVIC 2017, Bangi, Malaysia. He has been invited as a guest speaker for public lectures at SQL Saturday Malaysia 2015 and 2016, a training event for Microsoft Data Platform professionals and those wanting to learn about SQL Server, business intelligence and analytics. He has served as a programme committee member, external examiner, and referee for more than five international conferences, and has published more than 30 papers in leading international conference proceedings and peer-reviewed journals.

Abstract:

We introduce Examviz, a novel tool designed for visualizing examination schedules and clashes at the initial level. Examviz follows the visual analytics process (VAP): the schedule is typically processed computationally with a local search algorithm, then visualized and interpreted by the user in order to perform problem solving through direct interactions between the primary data, the processing and the visualization. An integrated problem solving environment (PSE) is proposed that analyses the combined effect of user-driven steering with automatic tuning of algorithmic parameters, based on the constraints and the criticality of the application for the simulations. It is important to allow the human timetabler to steer the ongoing simulation, especially in the case of critical clashes between conflicting courses, exams and time slots. An integrated visual design, Examviz, based on the parallel coordinates style of visualization, which uses a novel mapping of courses to exams and to time slots, has been developed. Examviz has three processing phases which combine human factors and the algorithm to explore conflicting data through visualization, particularly to provide incremental improvements over the solution.

  • Computer Graphics Applications

Session Introduction


Speaker
Biography:

Jiri Navrátil received his PhD in Computer Science from the Czech Technical University in Prague in 1984. He worked for 30 years at the Computing and Information Center of CTU in different positions linked with high performance computing and communications. During several sabbatical leaves he worked in Switzerland, Japan and the USA in the field of networking. In 2006 he started working for CESNET, the Czech education and scientific network, as leader of a group supporting special research applications that use the high speed Internet. In recent years he has participated in several multimedia performances organized within large international collaborations in different fields.

Abstract:

CESNET is a research organization with research focused on networking and Internet applications including video processing. CESNET also plays a role of the National Research and Educational Network (NREN) in the Czech Republic providing e-infrastructure (high-speed network, computing services and data storage facilities) to academic users in the country. The CESNET network is a part of the pan European network GEANT, which connects all academic networks in Europe and provides many links to Asia, Africa, South America and the US. It creates an ideal environment for collaboration in many directions of science, medicine and culture.

Over the years CESNET has developed two technologies that allow transmission of HD and UHD video over a network: UltraGrid, a software-based solution, and MVTP, a hardware-accelerated FPGA-based solution. Both technologies are widely used as technological tools in events which need high quality and low latency video. In the last several years we have organized, together with several partners (the Music and Dance Faculty of the Academy of Performing Arts in Prague, Konic Theatre Barcelona, the New World Symphony Miami, KISTI Korea, the Royal Danish Academy of Music (RDAM) and the APAN Cultural Work Group), several Cyber Performances (CP) as joint events in which artists from many countries participated. The main goal of such CPs is to demonstrate the capability of the current Internet to support live collaboration and on-line interactions of performing artists (musicians, dancers, animators) across countries and continents using modern multimedia tools.

These CPs are not simple or cheap; they need long-term planning and preparation, and finally close collaboration of many people from different fields (artists, technicians, networkers). This is the reason why they are usually organised only in the frame of globally significant IT events such as Supercomputing, APAN or Internet2 meetings, TNC conferences, CineGrid workshops, GLIF meetings, etc. From the past we can recall the successful CP “Dancing Beyond Time” at the 36th APAN meeting in Daejeon, South Korea, the CP “Dancing in Space” at the 37th APAN meeting in Taiwan, and “Walking in Historical Prague” at the Internet2 meeting in Honolulu, USA. The Network Performing Arts Production Workshop (NPAPW) is an event connecting the creators, artists and technicians working in this field from around the world to present their projects and discuss ways to proceed in this area. CESNET has participated in several previous NPAPW workshops, with a distributed concert “Piano and Violin” in London 2015 and “Organs and Trumpet” in Miami 2016. Our colleagues Ian Biscoe and Jana Bitter presented an outdoor CP “Bridge to Everyone” at NPAPW 2016.

At this conference, CGA 2017, we will describe our experiences from the last CP, prepared for NPAPW 2017 in Copenhagen, called “Similarities”. The story of the performance is the following: the performers are guides on a journey through their locations. Dancers guide the audience through their location via movement, directly interacting with and interpreting various features of their respective location (shapes, forms, colours, structures of the place). Musicians provide a unifying soundtrack for the dancers and, ideally, also react musically to the dancers' movements. The guided journeys (local performances) are captured on video, and the eye of the camera provides the progression of the resulting performance; a film made in real time for Copenhagen. As the eye of the camera is selective, it reveals the location to the audience only gradually. The journey goes from the micro world of details and very close-up video through to the full image of each location. In the beginning, the detailed shapes and forms of each location seem very similar without being specific to one location; then, as the camera zooms out during the course of the performance, the viewer begins to recognize more and more the specificity of the location. Performers communicate with the other locations and performers (because they can see video from the other locations) by searching for similarities: similar shapes, structures or forms.

Teams from the Czech Republic (CZ), the US, Spain (ES) and Denmark (DK) jointly participated in this event. The team included network engineers and researchers, audio-visual technicians, programmers, musicians, dancers, scene designers and choreographers, with some people spanning multiple areas. The event began simultaneously in Prague (CZ), Barcelona (ES), Miami (US) and Copenhagen (DK). The music performance was captured by a 4K camera and delivered from the NTK National Technical Library to Barcelona.

Speaker
Biography:

Mohammad Ali Mirzaei received his PhD degree from the École Nationale Supérieure d'Arts et Métiers in Paris, France and serves as a full-time researcher at the European Organization for Nuclear Research (CERN) on the ATLAS experiment in Geneva, Switzerland. His research interests include image processing acceleration, associative memory chips, FPGAs, ASICs and heterogeneous systems.

Abstract:

Image processing has been effectively employed in many engineering and research fields such as biology, medicine, robotics, unmanned aerial and ground vehicles, simulators, the military, media, live streaming, web-based applications and so on. Real-time image processing on different hardware architectures finds thousands of very interesting applications ranging from robotics to computer-aided navigation and simulators. This field of image processing is in high demand because the research outcome can be a semi-industrial prototype, which can be marketed with a little engineering work or used directly in industry. So far, many hardware platforms have been developed for this purpose, including platforms based on FPGAs, GPUs, GPGPUs, and mixed GPU-FPGA or CPU-FPGA designs. Recently, considerable effort has been made to design ASICs for image processing needs and to combine these ASICs with the above-mentioned platforms to make high performance heterogeneous architectures. The aim is to accelerate image processing algorithms beyond the current frontier of the technology.
In this talk, I will present some of the latest efforts in this regard.

Sheng-Ming Wang

Department of Interaction Design, National Taipei University of Technology, Taiwan

Title: Design thinking for developing a case-based reasoning emotional robot
Speaker
Biography:

Sheng-Ming Wang is an associate professor working on human-computer interaction technology and service design in the Department of Interaction Design at National Taipei University of Technology. He received his MS degree in Building and Planning, and his PhD degree from the School of Computer Science, University of Leeds, UK, in 1998. He has worked professionally on a number of interdisciplinary integration projects for smart interaction technology development and serious game development. He is known for natural user interface technology development projects for the future classroom, which were funded by the Ministry of Science and Technology, Taiwan.

Abstract:

Research has shown that affective computing technology and machine learning mechanisms can be introduced to enhance the interaction and feedback between interactive service robots (ISRs) and users. This study integrates the concept and method of design thinking, emotion-detection technology, and case-based reasoning (CBR) to simulate the service situation of an interview, and thus to develop a prototype emotion-sensing robot (ESR) system. The results of the experiment were then used to analyze the effectiveness of integrating the corresponding technologies as well as the value, utility, and affordance of the developed system.

The empirical verification of this study began with a pilot test to create a basic database based on a simulated case, and initial weights were assigned to each attributing factor. Then, the prototype system was tested using participants from various fields of expertise and backgrounds, and differences in interaction and feedback between participants and the system were analyzed. These differences were then introduced into the system as references to modify the weights of each attributing factor when testing with participants from different professional areas. Empirical results showed that the emotional responses of participants during the simulated interview were consistent with those hypothesized in the user journey map. The results also revealed that blink rate was a significant determinant of the perception of tension. The predictive power in detecting facial expressions, the analysis of semantic emotions, and the accuracy of keyword matching related to the perception of tension appeared to differ significantly between participants from different fields of expertise and backgrounds. Therefore, assigning more weight to detection factors that correlate specifically with participant emotions helps to reveal the utility of the ESR system prototype. Although the system meets both user requirements and user-oriented design requirements, and demonstrates its affordance in this study, further improvements can be made. Future studies are necessary to enrich the cases in the database of the CBR system and establish a foundation of machine learning principles for ESRs.
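The weighted retrieval idea above, i.e. giving detection factors such as blink rate more influence when they correlate with the perceived emotion, can be illustrated with a generic weighted-similarity CBR sketch. The attribute names and weights here are hypothetical examples drawn from the factors the abstract mentions, not the study's actual model:

```python
def weighted_similarity(case_a, case_b, weights):
    """Weighted similarity between two cases over shared attribute
    scores in [0, 1]; a generic CBR building block."""
    total = sum(weights.values())
    return sum(w * (1.0 - abs(case_a[k] - case_b[k]))
               for k, w in weights.items()) / total

def retrieve(query, case_base, weights):
    """Retrieve the most similar stored case (the 'reuse' candidate
    in the classic CBR retrieve-reuse-revise-retain cycle)."""
    return max(case_base,
               key=lambda c: weighted_similarity(query, c["features"], weights))
```

Raising the weight of a factor (here, blink rate) makes retrieval more sensitive to it, which mirrors the study's re-weighting of factors per participant group.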

Chieh-Ju Huang

Chienkuo Technology University, Taiwan

Title: The sound of geothermal: animation and board game design
Speaker
Biography:

Chieh-Ju Huang is a Lecturer in Design in the Department of Commercial Design at Chienkuo Technology University, Taiwan. She has worked professionally in service design, design thinking, user experience, and educational board game design. She is currently a PhD candidate in the Doctoral Program in Design, College of Design, National Taipei University of Technology.

Abstract:

We use the traditional Chinese story character “the god of fire” to design an animation that describes the knowledge and mechanism of geothermal power generation. The animation also shows how people can collaborate to use renewable energy to solve the problem of community electricity shortages and the crisis of energy overuse. Besides the theory of geothermal energy, the storyteller explains the development, operation and function of geothermal energy.

In addition, a board game was designed using the “Design Thinking” and “POEMS” design tools. When designing this board game, a design thinking workshop was held to investigate how and why the board game would be played. The POEMS design tool informs the game rules through: People (the users in this game), Objects (the objects in this game), Environment (the content and environment in this game), Messages (the knowledge from this game), and Services (the services and activities in this game). The game is for multiple players, and the cards include path cards and tool cards for interacting with others. The board game is based on rules derived from the application of green energy. Players learn how hydraulic, wind, fire, nuclear, and geothermal power generation work by playing this board game.

Altogether, this animation and board game for education and energy awareness teach users about power generation and environmental protection.

Speaker
Biography:

Cagri Baris Kasap is an independent Assistant Professor currently in the process of changing workplaces. He works in the fields of UX/UI and interaction design.

Abstract:

While on one level Rockstar Games' Grand Theft Auto series (GTA) is all kitschy, gratuitous violence for entertainment purposes, it is also a masterpiece of interactive design. Arguably, it presents one of the most sophisticated developments in commercial video gaming to render a highly traversable urban space, one in which a player performs actions with a tremendous degree of freedom and unscripted spontaneity. This accounts for its wild popularity in the gaming market. The best-selling video game in America in 2001, GTA III's success was usurped only by the release of the game's next evolution, Grand Theft Auto: Vice City, which became the year's best seller in 2002. With the October 2004 release of Grand Theft Auto: San Andreas, likely the most anticipated game of the year, Rockstar once again set the gaming world on fire with its latest sprawling work of twisted genius. Since its first version released in 1997, Grand Theft Auto, as a game that fulfills the standards of being an ‘action-adventure’, ‘driving’, ‘role-playing’, ‘stealth’ and ‘racing’ game all at once, has gone through several (seven) versions. In this paper, I will try to map out the similarities and differences between each version.

Speaker
Biography:

Mahmoud Abd Ellatif is an Associate Professor in Faculty of Computers and Information, Helwan University, Egypt.

Abstract:

Current approaches to e-learning systems face some challenges; the research community has noted that the next generation of e-learning is the e-learning ecosystem. An e-learning ecosystem has many advantages: content must be designed for interaction, and learners create groups, interact and collaborate with each other and with educators. The e-learning ecosystem also has challenges; it needs to adapt the learning environment to various learners' needs and preferences. The e-learning ecosystem uses the teacher-student model, in which a fixed learning pathway fits all learners. The e-learning ecosystem needs to incorporate the concept of personalization by adopting new technologies. Using Semantic Web ontologies and the Semantic Web Rule Language to personalize the learning environment plays a leading role in building a smart e-learning ecosystem and enriching the learning environment.
The main points of my talk include:

  1. E-learning ecosystem layers
  2. The semantic relations between learning style categories, learning objects, learning activities and teaching methods
  3. A semantic decision table to select the suitable learning styles to match learning objects for each learner
  4. Semantic Web ontologies and the Semantic Web Rule Language for personalizing the learning environment

Yibin Hou, Jin Wang*

Beijing University of Technology, China

Title: Investigation on the internet of things
Speaker
Biography:

Yibin Hou graduated from the computer science department of Xi'an Jiaotong University with a Master's degree in engineering, and received his doctoral degree from the department of engineering at Eindhoven University of Technology, the Netherlands. From 2002 to 2013 he served as Vice President of Beijing University of Technology, where he is a professor and doctoral supervisor, director of the Embedded Computing Institute, deputy director and secretary-general of the university's academic committee, and director of the Beijing Internet Software and Systems Engineering Technology Research Center. His research interest is the Internet of Things.

Jin Wang received a Bachelor's degree in Software Engineering from Beijing University of Chemical Technology, Beijing, China, in 2012. She won the National Scholarship in 2010 and the National Endeavor Fellowship in 2009. She received her Master's degree in Computer Application Technology from Shijiazhuang Tiedao University in 2015. She has published many papers indexed by ISTP, EI and SCI, and has participated in a National Natural Science Foundation project. Since 2015 she has been completing her PhD in the School of Software Engineering, Department of Information, Beijing University of Technology. Her research interests are the Internet of Things, software engineering, embedded systems, and image and video quality assessment.

Abstract:

This talk defines and investigates the Internet of Things (IoT) problem. Research on the IoT first studies the objects, then the links, and then the network: objects are the things in the IoT, the link is how objects connect to the network, and the network is what this network actually is. The objective function is the key problem; one can start with simple and critical questions, and an algorithm is the sequence of steps that solves the problem. What is the Internet of Things? Objects connected to the Internet form the IoT, for example cup networking or car networking. What makes the IoT better than other networks is what objects it is composed of, what its composition and nature are, and what innovation and superiority it offers. Four key IoT technologies are widely used: RFID, WSN, M2M, and the integration of these. RFID can be implemented using MATLAB, NS2 or Android; WSN can be implemented using NS2 or OMNeT++; and M2M can be developed using Java. Therefore, this talk focuses on the advantages of the IoT over the Internet. The IoT has no unified definition: some believe that the interconnection of RFID is the IoT, some that a sensor network is the IoT, some that M2M (machine to machine) is the IoT, and some that stretching and extending the Internet to any goods is the IoT. The IoT not only meets the demand for networked information about goods, but is also pushed forward by current technology development. Finally, and most importantly, the IoT can boost the economy, so investigation of the IoT is very important. E-commerce, such as Jingdong's and Taobao's free trial centers, has become a hot topic. We also study Shijiazhuang Taihe and EGO digital city, as well as Beijing Baidu, Hui Hai, IBM, Beyondsoft and other well-known companies working on the IoT and the Internet.
The next step is the study of JSP frameworks in the IoT, such as the SSH framework and the SSM framework. SSH refers to Spring + Struts + Hibernate, and SSM refers to Spring + Spring MVC + MyBatis. Java programming, recording and audio are also important directions for development.

Results: Mother-daughter_qcif.yuv's VQM, FOOTBALL.yuv's PSNR and VQM, SRC13's VQM, and SRC22's VQM at 1068 kb and 1062 kb are shown in Figures 1 to 6, respectively.
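The PSNR values reported in these figures follow the standard definition over the mean squared error of a frame; a minimal sketch for 8-bit data (illustrative only, not the authors' code; VQM is a separate standardized metric and is not shown):

```python
import numpy as np

def psnr(ref, dist, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference and a
    distorted frame (e.g. one plane of a YUV video), 8-bit by default."""
    mse = np.mean((ref.astype(np.float64) - dist.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(peak ** 2 / mse)
```

For video, the per-frame PSNR values are typically averaged over the sequence before being plotted against bitrate or MOS.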

Figure 1: Mother-daughter_qcif.yuv VQM
Figure 2: FOOTBALL.yuv PSNR
Figure 3: FOOTBALL.yuv VQM
Figure 4: SRC13 VQM

Figure 5: SRC22 VQM 1068kb
Figure 6: SRC22 VQM 1062kb
Figure 7: src13 MOS
Figure 8: src13 MOStu

Src13 MOS is shown in Figure 7, src13 MOStu in Figure 8, SRC13 MOSd-MOSo in Figure 9, src13 wuxian MOS in Figure 10, SRC13 wuxian PSNR-MOS in Figure 11, src13 wuxian MOStu in Figure 12, and src13 wuxian MOSd-MOSo in Figure 13.

Figure 9: SRC13 MOSd-MOSo
Figure 10: src13 wuxian MOS
Figure 11: SRC13 wuxian PSNR-MOS
Figure 12: src13 wuxian MOStu
Figure 13: src13 wuxian MOSd-MOSo

Notes/Comments:
This work was partially supported by the National Natural Science Foundation of China (Nos. 61203377, 60963011, 61162009), the Jiangxi Natural Science Foundation of China (No. 2009GZS0022), the Special Research Foundation of Shijiazhuang Tiedao University (Nos. Z9901501, 20133007), and the Naval Logistics Project (CHJ13L012).
 

Speaker
Biography:

Chieh-Ju Huang is a Lecturer in Design in the Department of Commercial Design at Chienkuo Technology University, Taiwan. She has worked professionally in service design, design thinking, user experience, and educational board game design. She is currently a PhD candidate in the Doctoral Program in Design, College of Design, National Taipei University of Technology.

Abstract:

This project uses “methane ice formation and mining techniques” as the theme to transfer the associated knowledge into general science education based on storytelling, scenario design, character design, interaction design and hologram projection technologies. Two learning systems were developed in this project. The first is called “The Animation Learning System for Methane Ice Formation and Energy Transformation”; the second is called “The Hologram Projection Learning System for the Knowledge Kernel and Structure Recognition of Methane Ice”. Two activities were held, inviting elementary school and high school students to learn the science of methane ice using the two systems developed in this project. The evaluation results show that the usability of these two systems is very good for both elementary school and high school students. This rules out the possibility that learning achievement in methane ice science is degraded by unfriendly system design. Further learning achievement evaluation based on the ARCS learning motivation model will be performed to show the affordance of the methane ice science learning mechanism proposed in this project.

Speaker
Biography:

Alan Soares is a researcher at the CGLab of the Federal University of Bahia, Brazil, and runs a company that provides software development services. He is also a Master's student in computer science focusing on gesture recognition, having begun his journey through undergraduate research. In addition to being a researcher, he has excelled in the technology industry for his knowledge and experience in software development. He worked with simulated bipedal robots during his undergraduate studies and, as a result, obtained important titles in competitions such as LARC and the robotics world competition held by the RoboCup Federation.

Abstract:

The recognition of dynamic hand gestures using pure geometric 3D data in real time is a challenge. RGB-D sensors have simplified this task, giving an easy way to acquire 3D points and track them using depth map information. But using this collection of raw 3D points as a gesture representation in a classification process is prone to mismatches, since gestures by different people can vary in scale, location and velocity. In this paper we analyze how different simplification and regularization techniques can provide more accurate representations of the gestures. Using Dynamic Time Warping (DTW) as the classification method, we show that the simplification and regularization steps can improve the recognition rate and also reduce the time of gesture recognition.
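A minimal sketch of such a pipeline, assuming a simple centroid/scale normalization as the regularization step and nearest-neighbour DTW matching; the function names and normalization choice are illustrative, not the paper's actual implementation:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping distance between two trajectories
    a (n x d) and b (m x d) of per-frame 3D points."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def normalize(traj):
    """One possible regularization step: translate to the centroid and
    scale to unit RMS radius, so gestures performed at different
    locations and scales become comparable."""
    traj = np.asarray(traj, dtype=float)
    traj = traj - traj.mean(axis=0)
    scale = np.sqrt((traj ** 2).sum(axis=1).mean())
    return traj / (scale + 1e-12)

def classify(query, templates):
    """Nearest-neighbour classification over (label, trajectory) pairs."""
    q = normalize(query)
    return min(templates, key=lambda kv: dtw_distance(q, normalize(kv[1])))[0]
```

Downsampling the trajectories before the DTW step (the simplification the abstract refers to) reduces the quadratic cost of the distance computation as well.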

Speaker
Biography:

Ogunlade Benjamin Ande is an Associate Professor of Graphics and Communication Design in the Department of Fine and Applied Arts, Ladoke Akintola University of Technology, Ogbomosho, Oyo State, Nigeria. He holds a BA in Graphic Design, an MA in (Industrial Design) Graphics and a PhD in (Industrial Design) Graphics from Ahmadu Bello University, Zaria. He was also Head of Department and served on various committees at both the academic and administrative levels of the university. He has supervised a large number of undergraduate and postgraduate graphic design students in and outside the university. He is a member of relevant professional bodies such as the Association of African Industrial Designers (AAID), the Society of Nigerian Artists (SNA) and the Advertising Practitioners Council of Nigeria (APCON). He has attended and presented papers at many symposia and conferences at both national and international levels, with over thirty-five publications in reputable journals (both local and international). He serves as an editorial board member for reputable journals and as an external examiner to various tertiary institutions.

Abstract:

The bewildering pace of change in technology has had a polarizing effect on the design education profession. Graphic design educators tend to cope in two ways: either by finding the least invasive ways to use technology without interfering with their standard mode of practice, or by embracing technology at every step and turn in new and innovative ways. The former does a disservice to graphic design students, but the latter is unsustainable. This research explores the sustainability of using computer technology in graphic design education and puts forward principles and guidelines to determine the most effective technology tools to use in the most sustainable situation. In this framework, the onus is put on graphic design students to complete active graphic design projects in and outside the studios. The resulting learning environment and the computer tools employed in various graphic design activities are investigated in this study. Data were collected via field notes, questionnaires, student interviews, researcher journal entries, and student reflections. The findings of this research indicate that a principled approach to the sustainable use of computer technology in graphic design education fosters a student-centered orientation which raises student motivation, reduces the affective filter and builds confidence without placing undue pressure on lecturers' demonstrations or on limited design resources.

  • Poster Presentation
Speaker
Biography:

Ali Ukasha is an Associate Professor in Animation/Illustration in the Department of Electrical and Electronics Engineering at Sebha University, Libya. He has published more than 33 papers in international conferences and journals, and published a book on image processing in 2016.

Abstract:

The necessity of knowing the boundaries of the image is occupies the most important to researchers. With clear conrours, the doctor can easily diagnose the patient's condition. This is possible, but the challenge is whether we can do that for the medical image after it has been encrypted. The encryption algorithm used here is RSA algorithm (Rivest-Shamir-Adleman) which uses two-key encryption, one of them is secret. In this work we introduce a new idea to extract the contours from the encrypted image after converting them to spectral domain methods using Lifting Wavelet, Walsh, and Periodic Haar Piecewise-Linear Transforms. In the specrum image, the compression is done using zonal sampling method. To increase security, the Arnold transform will be applied to the encrypted image using privat keys. The contours extraction from the reconstructed medical image can performed using Canny edge detector. The comparison between those specral algorithms is performed in terms of mean square error, peak signal to noise ratio, compression ratio, and the contour points number which can be detected by the edge detector operator. The experiments results show that by this algorithm, the contour points can be easily detected from the transmitted encrypted medical image and is better using DCT transform. The compression ration using PHL transform is exceeds to 88.5391% with retained energy reached to 84.125%.