Research Abstracts — Colombo SIGCHI Research Showcase

Colombo SIGCHI Chapter
Jul 2, 2021


It is our pleasure to introduce the Proceedings of the 1st Colombo SIGCHI Research Showcase 2021, organized by the Colombo ACM SIGCHI chapter. Due to the pandemic situation, the event was held online via the Zoom platform on 26th June 2021. An audience of more than 80 attended the invited speech by Dr. Eunice Sari — SIGCHI Chapter VP, and the keynote address by Prof. K P Hewagamage — Director, University of Colombo School of Computing (UCSC). The objectives of the event were to connect young aspiring researchers with a wider audience, share ongoing or conceptual research with a vibrant HCI community, and invite critical feedback, suggestions for improvement, and collaborations. Prior to the showcase day, the organizing team announced an open call for research abstracts, which attracted 32 submissions from local universities and institutions. The submissions were rigorously reviewed by the program team led by Chapter Chair Dr. Dilrukshi Gamage — Tokyo Tech, Chapter Treasurer Dr. Thilina Halloluwa — UCSC, Chapter General Secretary Dr. Shyam Reyal — SLIIT, and Research Showcase Organizing Committee member Dr. Kalpani Manathunga — SLIIT. After the review cycle, 23 submissions were selected for listing in the Colombo SIGCHI Medium posts, and 12 of these were given the opportunity to present on the showcase day and receive feedback through a live Q&A. This post lists all 23 selected abstracts with author lists, camera-ready abstracts, and videos of the presentations. Please reach out to the Colombo SIGCHI chapter or the authors directly for collaborations on these research projects.

Colombo SIGCHI Research Showcase, 26th June 2021, featuring 12 presentations — recording of the event

The following sections contain the details of the 23 selected research abstracts.

SINGING TO SPOKEN VOICE: A MACHINE LEARNING BASED APPROACH

Devanshi Ganegoda — Sri Lanka Institute of Information Technology

Dr. Kalpani Manathunga — Sri Lanka Institute of Information Technology

Dr. Dharshana Kasthurirathna — Sri Lanka Institute of Information Technology

“Singing to Spoken Voice” is a machine-learning-driven research project aiming to synthesize a set of phonemes to reconstruct a speaking voice, taking as input a recording of a singing excerpt. The major focus of this project is to model voices by experimenting with different machine learning and deep learning techniques. The ultimate goal is to reconstruct the speaking voices of great singers of the past, such as the great Italian operatic tenor Enrico Caruso, by extracting and analyzing his voice from operas and then applying his singing voice in a text-to-speech context to synthesize his spoken voice. Such techniques and results can be applied across different applications and purposes, such as museum exhibits, art installations, and multimedia showcases.

Many studies have distinguished the singing voice from the speaking voice [1][2], and systems [3][4] and algorithms [5][6] can identify a singing voice from a speaking voice based on F0 and intensity. However, the unique characteristics that clearly define the same singer's/speaker's voice remain an open topic under study [7]. Some of the techniques used in synthesizing speech and singing are very similar to each other; these include formant synthesis [8], unit selection synthesis [9], statistical parametric speech synthesis (SPSS) [10], and, more recently, neural-network-based synthesis [11]. The use of machine learning algorithms, and especially of Generative Adversarial Networks (GANs), appears frequently in recent research [12]. Experimental results [13] show that such proposed methods outperform conventional methods in the naturalness of the synthesized singing voice. However, no proper methodology has been proposed for using these techniques to generate a spoken voice from a singing voice. Therefore, this research explores ways of identifying a proper methodology that uses GANs to generate a speaking voice from a given singing voice, addressing questions such as: Is it possible to generate an arbitrary human voice that exists in the form of singing by sampling from a continuous space? Which acoustic features of a unique human singing voice should be extracted from a song to reconstruct a speaking voice, given that the exact feature set is yet to be identified? And how can linguistic features, such as character and phoneme alignments in a song, be extracted and used to generate spectrograms?

Creating or cloning a human voice has been used in commercial applications such as VOCALOID [14], which is widely considered a state-of-the-art singing synthesizer addressing the conversion of a spoken voice to a singing voice. The concept is based on modifying the speech of a source speaker to render it perceptually similar to that of a specific target. Still, conversion of a singing voice to a spoken voice is not possible through these applications. The proposed research will therefore explore a machine-learning-based approach to converting a singing voice to a spoken voice.
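As an illustration of the acoustic feature extraction step discussed above, the following is a minimal sketch (assuming the librosa library is installed; the file path is a placeholder, not project data) of how a log-mel-spectrogram and an F0 contour could be pulled from a singing excerpt before any GAN-based modelling:

```python
# Minimal sketch of acoustic feature extraction for singing-to-speech conversion.
# Assumes the `librosa` package is installed; the file path is hypothetical.
import librosa

def extract_features(path="caruso_excerpt.wav"):
    # Load the singing excerpt at a fixed sample rate.
    y, sr = librosa.load(path, sr=22050)

    # 80-band log-mel-spectrogram, a common representation for neural synthesis.
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=1024, hop_length=256, n_mels=80)
    log_mel = librosa.power_to_db(mel)

    # Frame-wise fundamental frequency (F0), one of the cues used to separate
    # singing from speaking voice in the cited literature.
    f0 = librosa.yin(y, fmin=librosa.note_to_hz("C2"),
                     fmax=librosa.note_to_hz("C7"), sr=sr,
                     frame_length=1024, hop_length=256)
    return log_mel, f0

if __name__ == "__main__":
    log_mel, f0 = extract_features()
    print("log-mel shape:", log_mel.shape, "F0 frames:", len(f0))
```

In a GAN-based pipeline like the one proposed, representations of this kind would typically serve as conditioning inputs and training targets for the generator.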
[1] Y. Ohishi, M. Goto, K. Itou, and K. Takeda, “Discrimination between singing and speaking voices,” 9th Eur. Conf. Speech Commun. Technol., no. January, pp. 1141–1144, 2005.
[2] D. H. Klatt, “Review of text-to-speech conversion for English,” J. Acoust. Soc. Am., vol. 82, no. 3, pp. 737–793, 1987, doi: 10.1121/1.395275.
[3] A. Mesaros, “Singing voice identification and lyrics transcription for music information retrieval invited paper,” 2013 7th Conf. Speech Technol. Hum. — Comput. Dialogue, SpeD 2013, 2013, doi: 10.1109/SpeD.2013.6682644.
[4] Z. Duan, H. Fang, B. Li, K. C. Sim, and Y. Wang, “The NUS sung and spoken lyrics corpus: A quantitative comparison of singing and speech,” 2013 Asia-Pacific Signal Inf. Process. Assoc. Annu. Summit Conf. APSIPA 2013, 2013, doi: 10.1109/APSIPA.2013.6694316.
[5] T. Saitou, M. Unoki, and M. Akagi, “Extraction of F0 dynamic characteristics and development of F0 control model in singing voice,” Proc. ICAD, pp. 0–3, 2002, [Online]. Available: http://dev.icad.org/websiteV2.0/Conferences/ICAD2002/proceedings/41_MasatoAkagi.pdf.
[6] T. Saitou, M. Goto, M. Unoki, and M. Akagi, “Speech-to-singing synthesis: Converting speaking voices to singing voices by controlling acoustic features unique to singing voices,” IEEE Work. Appl. Signal Process. to Audio Acoust., pp. 215–218, 2007, doi: 10.1109/ASPAA.2007.4393001.
[7] J. L. Flanagan, “Voices of Men and Machines,” J. Acoust. Soc. Am., vol. 51, no. 5A, pp. 1375–1387, 1972, doi: 10.1121/1.1912988.
[8] J. Trindade, F. Araujo, A. Klautau, and P. Batista, “A genetic algorithm with look-ahead mechanism to estimate formant synthesizer input parameters,” 2013 IEEE Congr. Evol. Comput. CEC 2013, pp. 3035–3042, 2013, doi: 10.1109/CEC.2013.6557939.
[9] J. Bonada, M. Umbert, and M. Blaauw, “Expressive singing synthesis based on unit selection for the singing synthesis challenge 2016,” Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH, vol. 08–12-Sept., pp. 1230–1234, 2016, doi: 10.21437/Interspeech.2016-872.
[10] Y. Ning, S. He, Z. Wu, C. Xing, and L. J. Zhang, “Review of deep learning based speech synthesis,” Appl. Sci., vol. 9, no. 19, pp. 1–16, 2019, doi: 10.3390/app9194050.
[11] J. Shen et al., “Natural TTS Synthesis by Conditioning Wavenet on MEL Spectrogram Predictions,” ICASSP, IEEE Int. Conf. Acoust. Speech Signal Process. — Proc., vol. 2018-April, pp. 4779–4783, 2018, doi: 10.1109/ICASSP.2018.8461368.
[12] Y. Hono, K. Hashimoto, K. Oura, Y. Nankaku, and K. Tokuda, “Singing Voice Synthesis Based on Generative Adversarial Networks,” ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing — Proceedings, vol. 2019-May. pp. 6955–6959, 2019, doi: 10.1109/ICASSP.2019.8683154.
[13] S. Aso et al., “SpeakBySinging: Converting singing voices to speaking voices while retaining voice timbre,” 13th Int. Conf. Digit. Audio Eff. DAFx 2010 Proc., no. October, 2010.
[14] P. Chandna, M. Blaauw, J. Bonada, and E. Gómez, “WGansing: A multi-voice singing voice synthesizer based on the Wasserstein-Gan,” Eur. Signal Process. Conf., vol. 2019-Septe, 2019, doi:10.23919/EUSIPCO.2019.8903099.

VIDEO PRESENTATION

SINGING TO SPOKEN VOICE: A MACHINE LEARNING BASED APPROACH- Author profiles

Contactless parcel delivery to apartment balconies using Unmanned Aerial Vehicles (UAVs)

Nisal De Silva — University of Colombo School of Computing

Chethani Wijesekara — University of Colombo School of Computing

Nipuni Yapa Rupasinghe — University of Colombo School of Computing

Dr. Thilina Halloluwa — University of Colombo School of Computing

Dr. Kasun Karunanayaka — University of Colombo School of Computing

Dr. Manjusri Wickramasinghe — University of Colombo School of Computing

The “Last Mile Problem” in the supply chain refers to the inefficient final step of the delivery process, in which goods and services are delivered to the end-user. This last portion is highly expensive and time-consuming when deliveries are made using traditional methods such as delivery trucks or vans, especially in urban environments with poor road infrastructure and high traffic congestion. Researchers are now working on solutions to this problem using different approaches, and one such approach is using Unmanned Aerial Vehicles (UAVs) for parcel deliveries. Well-reputed companies such as Amazon, Google, and UPS have ongoing pilot drone delivery projects, but only with the scope of delivering parcels to a few selected homes in rural areas with gardens that have ample space to lower the UAV safely. Safety and privacy concerns in highly dense urban environments are among the reasons these projects are limited to rural areas. Due to high population density, urban areas contain a huge number of apartment buildings and complexes, some with a large number of floors. The apartments in these buildings usually do not have a private garden, but most have a balcony. In our research, we plan to discuss how UAVs can be used to deliver parcels to balconies in apartment buildings.
GPS signals are often inaccurate in urban areas due to the many high-rise obstacles. Therefore, excessive dependence on GPS navigation has been identified as one of the major challenges of operating UAV-related projects in urban areas. As our main research focus, we plan to discuss a UAV navigation method that does not rely solely on GPS signals. In the proposed method, we initially plan to create a model of the building to locate each apartment balcony separately. With such a model, we hope to generalize our system to any apartment building rather than limiting it to a given building. Then, when the balcony number is given, the UAV should be able to identify the destination point using the previously created model and navigate from the given starting point to the identified destination point with proper trajectory planning. Once the UAV reaches the balcony, it should be able to maintain a proper distance and angle to the balcony to drop the parcel safely without harming the occupants or damaging the property. In our research, we plan to analyze the available solutions for the above-mentioned scenarios to identify the most suitable approaches for our context and implement an autonomous UAV system that fulfils all of these requirements. Our solution will be a facility that comes with the apartment building itself and requires some infrastructure changes to the building. We therefore view the apartment building of interest from a futuristic perspective, where balconies will have a built-in area for drone deliveries.
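The abstract does not commit to a specific control scheme, but a simple proportional controller over the estimated distance and angle to the balcony illustrates the "maintain a proper distance and angle" step; the gains, setpoints, and limits below are hypothetical:

```python
# Illustrative proportional controller for holding a set distance and angle to a balcony.
# This is a sketch only: distance/angle estimates would come from the UAV's onboard
# perception (not modelled here), and gains, setpoints, and limits are hypothetical.

def approach_command(distance_m, angle_deg,
                     target_distance_m=1.5, target_angle_deg=0.0,
                     k_dist=0.8, k_angle=0.05):
    """Return (forward_speed, yaw_rate) commands from current estimates."""
    forward_speed = k_dist * (distance_m - target_distance_m)   # close the gap
    yaw_rate = -k_angle * (angle_deg - target_angle_deg)        # face the balcony
    # Clamp to safe limits before sending to the flight controller.
    forward_speed = max(-0.5, min(0.5, forward_speed))
    yaw_rate = max(-0.3, min(0.3, yaw_rate))
    return forward_speed, yaw_rate

# Example: 3.2 m away, 12 degrees off-centre.
print(approach_command(3.2, 12.0))  # -> move forward, yaw toward the balcony
```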

VIDEO PRESENTATION

Contactless parcel delivery to apartment balconies using Unmanned Aerial Vehicles (UAVs)- Author profiles

Modelling Pedestrian Reactiveness and Navigation for Crowd Simulation

Dhanushka Samarasinghe — University of Colombo School of Computing

Dr. Thilina Halloluwa — University of Colombo School of Computing

Dr. Kasun Karunanayaka — University of Colombo School of Computing

Dr. Manjusri Wickramasinghe — University of Colombo School of Computing

Human behaviour is often chaotic and unpredictable, which is why modelling human behaviour is a difficult task. For the past few decades, scientists have tried different ways to achieve this goal, for example by studying animal herds and how they move, or by trying to match crowd behaviour with the flow of gases and liquids. Still, it is very difficult to match the exact unpredictable nature of humans. Nevertheless, such models are widely used to model pedestrian behaviour in driving simulators. Modelling pedestrian behaviour is a vital part of driving simulators because it gives drivers a realistic experience of what it is like to deal with pedestrians. A problem occurs, however, when these simulators are used to train drivers in third world countries, because the existing models are built to simulate pedestrian behaviour in western countries, and pedestrians in third world countries behave very differently. It is therefore necessary to localize these simulations so that drivers can gain experience of what it is like to drive in a country like Sri Lanka. The purpose of this research is to fill the above-mentioned gap and build a simulation model that replicates the behavioural patterns of pedestrians in third world countries. This model will be useful in many areas, such as running real-time traffic simulations or, as mentioned before, training drivers using a localized driving simulator.
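For context, one widely used physics-inspired baseline of the kind the abstract alludes to (force- and fluid-based crowd models) is a social-force-style update, sketched below with illustrative parameters; the proposed work would localize or replace such fixed rules rather than adopt them as-is:

```python
# Sketch of a social-force-style pedestrian update (Helbing-type model), the kind of
# rule-based baseline the abstract contrasts with localized behaviour. Parameters
# are illustrative only.
import numpy as np

def pedestrian_step(pos, vel, goal, others, dt=0.1,
                    desired_speed=1.3, tau=0.5, A=2.0, B=0.3):
    # Driving force toward the goal at the desired walking speed.
    direction = (goal - pos) / (np.linalg.norm(goal - pos) + 1e-9)
    force = (desired_speed * direction - vel) / tau

    # Repulsive forces from other pedestrians (exponential in distance).
    for other in others:
        diff = pos - other
        dist = np.linalg.norm(diff) + 1e-9
        force += A * np.exp(-dist / B) * (diff / dist)

    vel = vel + force * dt
    pos = pos + vel * dt
    return pos, vel

pos, vel = np.array([0.0, 0.0]), np.array([0.0, 0.0])
goal = np.array([10.0, 0.0])
others = [np.array([2.0, 0.2])]
for _ in range(5):
    pos, vel = pedestrian_step(pos, vel, goal, others)
print(pos, vel)
```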

VIDEO PRESENTATION

Modelling Pedestrian Reactiveness and Navigation for Crowd Simulation — Author Profiles

Kansei Engineering Research on Designing a Smart Jacket for Bike Riders

EAS Silva — General Sir John Kotelawala Defence University

WKNC Perera — General Sir John Kotelawala Defence University

AVN Sandamini — General Sir John Kotelawala Defence University

GKVL Udayananda — General Sir John Kotelawala Defence University

JKSL Jayakody — General Sir John Kotelawala Defence University

DMAC Pivithuru — General Sir John Kotelawala Defence University

MGMUV Chandrasena — General Sir John Kotelawala Defence University

Dr. LP Kalansooriya — General Sir John Kotelawala Defence University

Background: Road accidents have become a severe problem in today's society. Motorbike users are most often the victims of these accidents, as riders are directly exposed to the environment with no shield to guard them on a two-wheeled vehicle. Motor cars, on the other hand, contain airbags as a safety feature for passengers in addition to their outer body. Although riding jackets exist, they do not have specific methods or features to reduce the impact of accidents; most existing jackets are used mainly for protection from extreme weather conditions. There are also automatic air-filling jackets, which are designed for specific systems and are not in common use; they fill with air only when the jacket detaches from the bike as an accident occurs, so no precautions are taken to reduce the impact before the accident takes place. Aim: This paper proposes a smart jacket for motorbike riders, designed using the Kansei Engineering process, to detect when an accident is about to take place and take precautionary measures. Methodology: We use the Kansei Engineering methodology to create a user-oriented product. We first collected a set of Kansei words by reviewing existing research. These words were then distributed among 60 participants, who rated them on a 5-point scale. Based on their ratings, we identified the major factors to be considered and then identified the design features to address those factors. These features were again distributed among the participants to gather their responses about the functionalities to be included when designing the product. Results: According to the surveys conducted, we identified airbag integration, GPS sensor integration, speed sensors, nearby-vehicle detection, and voice alerts as the major features to be included in the smart jacket. We also identified additional features such as resistance to tearing and dust, waterproofing, and the integration of a music controller, Bluetooth, a neck pillow, and a phone charging port. Users preferred the smart jacket to be a fully covered, black leather one. Conclusion: Based on the obtained results, the smart jacket was designed according to user preferences by integrating special features and functionalities. Since this is a user-oriented design, it can be used by civilians in their day-to-day lives.

Key words: Smart jacket, safety measurements, Kansei Engineering
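A minimal sketch of the rating-aggregation step in the methodology above, using hypothetical Kansei words and 5-point scores rather than the study's actual survey data:

```python
# Sketch of aggregating 5-point Kansei word ratings to shortlist design factors.
# The words and ratings below are hypothetical placeholders (a few of the 60
# participants), not the study's data.
from statistics import mean

ratings = {
    "safe":        [5, 4, 5, 4, 5],
    "comfortable": [3, 4, 4, 3, 4],
    "stylish":     [2, 3, 3, 2, 3],
    "connected":   [4, 4, 5, 4, 4],
}

# Rank Kansei words by mean rating and keep those above a cut-off (here 3.5).
scores = {word: mean(vals) for word, vals in ratings.items()}
selected = [w for w, s in sorted(scores.items(), key=lambda kv: kv[1], reverse=True) if s >= 3.5]
print(scores)
print("Factors carried into the design stage:", selected)
```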

VIDEO PRESENTATION

Kansei Engineering Research on Designing a Smart Jacket for Bike Riders — Author Profiles

A Study on the Development of Mental Stress Relieving Affective Application with aegis of Kansei Engineering Methodology

JMDT Nipul — General Sir John Kotelawala Defence University

RMPB Rathnayaka — General Sir John Kotelawala Defence University

DTN Senanayake — General Sir John Kotelawala Defence University

MASD Munasinghe — General Sir John Kotelawala Defence University

PVRCD Asmadala — General Sir John Kotelawala Defence University

JASI Jayasekara — General Sir John Kotelawala Defence University

Dr. Pradeep Kalansooriya — General Sir John Kotelawala Defence University

Ms. Dasuni Ganepola — General Sir John Kotelawala Defence University

Helping people relax while listening to music and reducing their stress levels can help them rebuild long-term and damaged relationships. Recognizing emotions and reducing stress levels while listening to music also encourages people to use new technology (smartphones and headsets). Many people already have the habit of listening to music while travelling, sleeping, studying, and working; giving them the ability to monitor their emotions and stress levels with no additional support or effort would make their lives healthier. The proposed emotion-detecting application can be used to measure stress levels and bring them back to a normal level by suggesting music as therapy for the user's current level of stress, with the degree of stress recognized by measuring the user's ECG or EEG signal patterns. The main aim of this research is to provide a fully emotion-aware system that automatically prepares a playlist according to the user's stress level. While various forms of biosensor devices have been investigated, including devices worn on the finger, neck, hand, and chest, most potential wearables currently need bulky and clumsy connections to external hardware (for power or data acquisition). Owing to the cumbersome presence of cameras, batteries, and on-body components, multiple applications ranked poorly in terms of aesthetics and wearability in a recent study of 25 state-of-the-art wearable devices for health tracking. We will therefore conduct a literature review to identify an appropriate biosensor device for measuring stress levels. Stress level is categorized as high or low, and emotions are categorized as positive or negative; for example, under high stress and positive emotion, the detectable emotions include excited, happy, and pleased, while low-stress positive emotions include relaxed, peaceful, and calm. The created playlist allows stressed users to recover, users with low stress levels to come up to a normal stress level, and users with a normal stress level to maintain it. From the observed and analyzed ECG data, we classify a person's emotional state into four main categories according to stress level and emotion: high and positive, high and negative, low and positive, and low and negative. For these four categories, a database is created containing music suitable for the corresponding stress levels. The average stress level reported for a healthy person is 3.5 for Matures, 4.3 for Boomers, 5.8 for Gen Xers, and 6.0 for Millennials.
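As a sketch of the category-to-playlist mapping described above, the snippet below assigns hypothetical stress (arousal) and valence scores, such as might be derived from ECG/EEG features, to the four categories and picks a playlist; the thresholds and playlist names are assumptions, not the system's design:

```python
# Sketch: map estimated stress (arousal) and valence scores, e.g. derived from
# ECG/EEG features, to the four categories in the abstract and pick a playlist.
# Thresholds and playlist names are hypothetical.

PLAYLISTS = {
    ("high", "positive"): "upbeat_celebration",
    ("high", "negative"): "calming_slow_tempo",
    ("low", "positive"):  "relaxed_acoustic",
    ("low", "negative"):  "gentle_uplifting",
}

def categorize(stress_score, valence_score, stress_cut=0.5, valence_cut=0.0):
    stress = "high" if stress_score >= stress_cut else "low"
    valence = "positive" if valence_score >= valence_cut else "negative"
    return stress, valence

state = categorize(stress_score=0.72, valence_score=-0.1)
print(state, "->", PLAYLISTS[state])   # ('high', 'negative') -> calming_slow_tempo
```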

VIDEO PRESENTATION

A Study on the Development of Mental Stress Relieving Affective Application with aegis of Kansei Engineering Methodology — Author Profiles

Data-Driven Marking Automation

Janani Tharmaseelan — Sri Lanka Institute of Information Technology

Dr. Kalpani Manathunga — Sri Lanka Institute of Information Technology

Dr. Shyam Reyal — Sri Lanka Institute of Information Technology

Dr. Dharshana Kasthurirathna — Sri Lanka Institute of Information Technology

Due to the growing popularity of computer science and the IT sector, an increasing number of students have started enrolling in programming modules. Marking programming submissions is becoming increasingly tedious, since marking an assignment requires a considerable amount of time and effort. Moreover, large student numbers increase the overhead of marking; therefore, evaluating such assessments needs to be automated. Programming assignments mainly contain algorithm implementations written in specific programming languages, where the questions assess students' logical thinking and problem-solving skills. Marking automation approaches fall under two main categories: test-case-driven approaches and source code analysis. Test-case-driven marking can only be done for programs that take inputs and produce outputs, and such programs must be syntactically correct. Given that many marking rubrics/evaluation criteria award partial marks to programs that are not syntactically correct, evaluators are required to analyze the source code during evaluation, which adds the additional burden of source code analysis and requires more time and effort. In the past, research has been done on automated marking using natural language processing, regular expression matching, machine learning, and simple neural networks. Yet such approaches are not general solutions, nor have they consistently achieved high efficiency or accuracy. Hence, this research intends to optimize the marking effort using a novel approach based on a Graph Convolutional Neural Network (GCNN). The research problem is to develop a model that checks the similarity between the source code submitted by students and the lecturers' answers and produces a mark recommendation. The model will be based on a GCNN. Our proposed approach is novel, since little or no literature has been found that applies GCNNs to marking automation. In the data-driven approach we propose, students' submissions collected over a period will be used to train the proposed model. Using the learned model, new assignments will then be categorized into the corresponding models; such models would contain the assigned marking rubric to assess the assignments and provide feedback automatically. The proposed technology for the data-driven approach is the Graph Convolutional Neural Network, a type of Graph Neural Network. GCNNs seem well suited to this task, since assignments and marking rubrics can be represented as graphs, and such graph data are used to learn representations of nodes based on the initial node vector and its neighbours in the graph. The proposed idea is that each student answer will be represented as a single graph containing keywords, variables, and calculations as nodes, and vector comparison will be used to automate the student's marks.
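To make the GCNN idea concrete, here is a minimal numpy sketch of one graph-convolution propagation step over a code graph, mean-pooling into a graph embedding, and comparing a student graph against a reference by cosine similarity; the graphs, features, and untrained weights are toy placeholders, and the mark mapping at the end is purely illustrative:

```python
# Sketch of the core GCN idea behind the proposed marker: propagate node features over
# the code graph (H' = ReLU(D^-1/2 (A+I) D^-1/2 H W)), pool to a graph embedding, and
# compare student vs. reference answers by cosine similarity. All data here is toy data.
import numpy as np

def gcn_layer(A, H, W):
    A_hat = A + np.eye(A.shape[0])                          # add self-loops
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))  # symmetric normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)  # ReLU

def graph_embedding(A, X, W):
    return gcn_layer(A, X, W).mean(axis=0)                  # mean-pool node states

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9))

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))                                  # shared, untrained weights (toy)
A_ref = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], float)   # lecturer's answer graph
A_stu = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)   # student's answer graph
X = rng.normal(size=(3, 4))                                  # toy node features (keywords, vars, ops)

similarity = cosine(graph_embedding(A_ref, X, W), graph_embedding(A_stu, X, W))
print("similarity:", round(similarity, 3), "-> suggested mark:", round(similarity * 100))
```

A trained model would learn W (and deeper layers) from the collected submissions rather than use random weights, and the similarity-to-mark mapping would follow the assigned rubric.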

VIDEO PRESENTATION

Data-Driven Marking Automation — Author Profiles

Simulating Microscopic Traffic with Diverse Vehicle Behaviour

B. Kiruthiharan — University of Colombo School of Computing

Dr. Thilina Halloluwa — University of Colombo School of Computing

Dr. Manjusri Wickramasinghe — University of Colombo School of Computing

Dr. Kasun Karunanayaka — University of Colombo School of Computing

Traffic simulation is the process of recreating the flow of vehicles observed in the real world in a simulation environment. Even though many traffic simulation models have been introduced in the past, one of the major topics still to be investigated in this field is simulating traffic in which vehicles have diverse behaviors. Simulating such traffic is necessary for developing more robust applications on top of traffic simulation models. Humans behave diversely when driving in traffic, and it is essential to reflect this when simulating traffic. This research discusses the research and development of a microscopic traffic simulation model that uses machine learning techniques to simulate vehicles with diverse behaviors. The main objective of developing such a simulation is to introduce human-like driving agents into traffic simulations. Previous works either mimic the laws of physics to simulate vehicle traffic or use mathematical and rule-based models. One of the main limitations of these approaches is that they follow a fixed set of rules and hence cannot model the unpredictable nature of human driving. Machine learning focuses on using data and algorithms to imitate how humans learn and improve from experience. Driving is a skill that humans gradually learn and improve over their lifetime; therefore, it is reasonable to expect that vehicles with diverse behaviors can be developed using machine learning techniques and used when simulating traffic. Machine learning techniques will therefore be explored to generate diverse vehicle behavior when simulating traffic.
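For reference, the Intelligent Driver Model (IDM) is a standard example of the rule-based car-following models the abstract contrasts with learned behaviour; it is sketched below with illustrative parameters and is not the approach proposed here:

```python
# Intelligent Driver Model (IDM): a classic rule-based car-following model of the kind
# the abstract contrasts with learned, diverse driver behaviour. Parameters illustrative.
import math

def idm_acceleration(v, gap, dv, v0=15.0, T=1.5, a=1.0, b=1.5, s0=2.0):
    """v: own speed (m/s), gap: distance to leader (m), dv: approach rate v - v_leader."""
    s_star = s0 + v * T + (v * dv) / (2 * math.sqrt(a * b))   # desired dynamic gap
    return a * (1 - (v / v0) ** 4 - (s_star / gap) ** 2)

# Follower at 12 m/s, 20 m behind a leader moving 2 m/s slower.
print(round(idm_acceleration(v=12.0, gap=20.0, dv=2.0), 3))
```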

VIDEO PRESENTATION

Simulating Microscopic Traffic With Diverse Vehicle Behaviour — Author Profiles

Shopping Assistant: A Fashion Suggesting Intelligent System using Natural Language Processing (NLP): An Aspect Based Opinion Mining Approach

Keerthiga Rajenthiram — University of Colombo School of Computing

Dr. MGNAS Fernando — University of Colombo School of Computing

In the 21st century, the use of Web 2.0 shows vast growth, attracting more users each year by allowing them to carry out most of their activities online. After the impact of Covid-19, most fashion clothing and textile businesses moved from the physical to the digital world. When it comes specifically to clothing and accessories shopping, people always look for new styles, fashions, and brands in the market. However, it is challenging to find high-quality products that meet exact needs in an online setting. Customer reviews and ratings play a vital role in helping customers find quality items that best match their needs. With the advancement of social media, the opinionated information and reviews available on the web are vast; it is therefore hard to manually go through and compare every review, and doing so is a time- and energy-consuming task. The lack of personalized suggestions based on customer opinions, user preferences, and other factors makes it difficult for users to choose among various items. Getting to know one's customers and their behaviour is very important for the success of a business; having only customer details and purchase records does not yield clear insight, and without knowing the targeted customer groups and the product's strengths and weaknesses, retailers cannot improve their sales. In addition, there is no option for users to try on clothes and accessories online before they make a purchase; customers must take their body measurements manually and decide on a size that would fit. This paper presents a ‘Fashion Suggesting Intelligent System’ that addresses the current problems customers and retailers face in the online fashion retail industry using opinion mining, aspect-based sentiment analysis, personalized suggestions, and augmented reality. The research proposes a hybrid approach that combines unsupervised and supervised learning to enhance sentiment analysis. The results of the sentiment analysis are used to provide personalized suggestions to customers and intelligent insights to merchants. The proposed system aims to assist customers and retailers while shopping and selling online by providing an intelligent system that can analyze customer opinions at the aspect level and provide personalized suggestions, deep customer insight, and targeted customer groups. In addition, the system brings augmented reality into the traditional online shopping context so that customers can virtually fit clothes before making a purchase.
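A toy sketch of aspect-level opinion scoring, the building block behind the proposed analysis: each aspect term found in a review is scored from the sentiment words around it. The lexicon, aspect list, and review are hypothetical, and the actual system combines unsupervised and supervised models rather than a fixed lexicon:

```python
# Toy sketch of aspect-based opinion scoring: score the words around each aspect term
# with a tiny sentiment lexicon. Lexicon, aspects, and the review are hypothetical.
LEXICON = {"great": 1, "perfect": 1, "comfortable": 1, "poor": -1, "loose": -1, "bad": -1}
ASPECTS = {"fabric", "size", "delivery", "price"}

def aspect_sentiment(review, window=2):
    tokens = review.lower().replace(",", " ").replace(".", " ").split()
    scores = {}
    for i, tok in enumerate(tokens):
        if tok in ASPECTS:
            context = tokens[max(0, i - window): i + window + 1]
            scores[tok] = sum(LEXICON.get(w, 0) for w in context)
    return scores

review = "The fabric is great, but the size runs loose and the delivery was poor."
print(aspect_sentiment(review))   # {'fabric': 1, 'size': -1, 'delivery': -1}
```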

VIDEO PRESENTATION

Shopping Assistant: A Fashion Suggesting Intelligent System using Natural Language Processing(NLP): An Aspect Based Opinion Mining Approach — Author Profiles

A Study on Utilizing Electroencephalogram Signals to Recognize Emotions of a Player in Affective Gaming

KMRT Munasinghe — General Sir John Kotelawala Defence University

SHIDN De Silva — General Sir John Kotelawala Defence University

MWS Pramodha — General Sir John Kotelawala Defence University

DIN Dahanayaka — General Sir John Kotelawala Defence University

WGT Heshan — General Sir John Kotelawala Defence University

EDSD Dhanuge — General Sir John Kotelawala Defence University

VS Samarathunga — General Sir John Kotelawala Defence University

Dr. Pradeep Kalansooriya — General Sir John Kotelawala Defence University

Ms. Dasuni Ganepola — General Sir John Kotelawala Defence University

Normally, when playing video games in day-to-day life, players must adapt themselves to the game. The researchers therefore set out to review how this game strategy could be changed according to the player's wishes and made more user-friendly. Playing a game by controlling it with a mouse, keyboard, or joystick gives much pleasure to the human brain and reduces stress; however, this can be called the traditional way of gaming, and the gaming experience can be improved. This is where the concept of affective gaming comes in. Affective gaming refers to a new generation of games in which the player's behavior directly affects the game. This study discusses a research phase in the development of an affective computer game based on this concept. The design of the game was based on the "Affective Loop" suggested by Sundstrom: the emotional state and actions of a player are recognized, and the gaming system changes itself accordingly to offer an improved user experience. There are many ways to capture real-time emotions for player emotion recognition, such as physiological signals, behavioral traits, facial expressions, and speech emotion analysis. However, emotion capture based on physiological signals is a cutting-edge approach for investigating an individual's behavior and reactions when exposed to information that may elicit emotional responses through multimedia tools such as video games. This research project mainly focuses on recognizing the emotions of the player by capturing the player's electroencephalogram (EEG) signals. EEG is a technology that can directly measure neuronal activity and is one of the most effective ways to do so. The main aim of this research project is to design a mechanism for an affective gaming system that captures and recognizes the emotions of players using their EEG signals. The researchers will work on how EEG signals can be detected and captured to recognize the real-time emotions of the player during gameplay. Machine learning and deep learning techniques are planned to be used to extract information related to the player's emotions and automatically recognize them.
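As a small illustration of the kind of EEG feature such a pipeline might feed into its ML/DL models, the sketch below computes relative band power (theta, alpha, beta) from a single synthetic channel; the sampling rate, bands, and signal are assumptions, not the project's recording setup:

```python
# Sketch of a common EEG feature for emotion recognition: relative band power from a
# single channel via an FFT. The signal here is synthetic; a real pipeline would use
# recorded EEG epochs per player.
import numpy as np

def band_powers(signal, fs):
    bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}   # Hz, assumed
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    total = psd.sum() + 1e-12
    return {name: float(psd[(freqs >= lo) & (freqs < hi)].sum() / total)
            for name, (lo, hi) in bands.items()}

# Synthetic 2-second "EEG" dominated by a 10 Hz (alpha) oscillation plus noise.
fs = 256.0
t = np.arange(0, 2.0, 1.0 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.randn(len(t))
print(band_powers(eeg, fs))   # alpha should dominate; such features feed the classifier
```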

VIDEO PRESENTATION

A study on Impact of using Electroencephalogram to improve Affective Gaming — Author Profiles

Modeling the Optimum Braking Distance Scenarios in Sri Lankan Road and Weather Conditions

Ilthizam Imtiyas — University of Colombo School of Computing

Dr. Thilina Halloluwa — University of Colombo School of Computing

Dr. Kasun Karunanayake — University of Colombo School of Computing

Dr. Manjusri Wickramasinghe — University of Colombo School of Computing

Sri Lanka has experienced rapid growth in vehicle accidents in recent times. Statistics issued by the Department of Police show that around eight people die each day due to road accidents. The road environment (i.e., road geometry, road surface conditions, animals, physical impediments), weather conditions, and driver parameters (i.e., uncontrolled speed, impromptu stops, incompetent driving, overtaking, fatigue, and rashness) are the primary factors behind the increase in vehicle accidents. According to previous studies, many vehicle accidents happen when drivers brake either too early or too late, so determining the optimum braking distance could significantly reduce vehicle accidents. With the advancement of human-computer interaction, simulated virtual environments can be used to train humans and to observe and gather human data across different activities. Since this research studies the braking distance of drivers in different weather and road conditions, a simulated environment will be used to reproduce braking-distance scenarios for different road and weather conditions. Human drivers will drive in the simulated environment, and data on the driver, road, and weather parameters will be continuously collected to train and build a machine learning model for optimal braking distance. Prior work has shown that various weather conditions, such as rain, and road surface conditions, such as wet, slippery, and dry, need to be considered when modeling and simulating such braking-distance scenarios. As such, this research explores ways of modeling the correlation between the various types of road surfaces, the weather environments, and the braking distances of drivers in a simulated environment created with the Unity3D game engine. As a result, this research attempts to build a model that finds the optimal braking distance by considering the weather conditions, the road surface, and driver parameters such as fatigue, age, and driving distance, using machine learning techniques.
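A simple physics baseline helps frame what the learned model must improve on: stopping distance is the reaction distance plus the braking distance v²/(2µg). The friction coefficients and reaction time below are illustrative textbook values, not measured Sri Lankan road data:

```python
# Baseline physics check for the simulated scenarios: stopping distance =
# reaction distance + braking distance v^2 / (2*mu*g). Friction coefficients per
# surface are illustrative values, not measured road data.
G = 9.81  # gravitational acceleration, m/s^2

FRICTION = {"dry": 0.7, "wet": 0.4, "slippery": 0.25}   # assumed coefficients

def stopping_distance(speed_kmh, surface, reaction_time_s=1.5):
    v = speed_kmh / 3.6                      # km/h -> m/s
    reaction = v * reaction_time_s           # distance covered before braking starts
    braking = v ** 2 / (2 * FRICTION[surface] * G)
    return reaction + braking

for surface in FRICTION:
    print(surface, round(stopping_distance(60, surface), 1), "m at 60 km/h")
```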

VIDEO PRESENTATION

Modeling the Optimum Braking Distance Scenarios in Sri Lankan Road and Weather Conditions — Author Profiles

BAMM: Bring Across My Music — an Intelligent Music Playlist Generator Based on Narrated Human Feelings

Thusithanjana Thilakarathna — Sri Lanka Institute of Information Technology

Dr. Shyam Reyal — Sri Lanka Institute of Information Technology

Dr. Nuwan Kodagoda — Sri Lanka Institute of Information Technology

Dr. Dharshana Kasthurirathna — Sri Lanka Institute of Information Technology

People listen to music all the time, intentionally or unintentionally. If you do it intentionally, you have multiple options: you can listen to a general medium like the radio, or follow your personal preference with the help of the latest technology. In both cases, you enjoy the music more if the song relates to what you feel [1, 2]. Daily music listeners tend to create a list of music tracks based on their preferences [3]; this list is known as a playlist [4]. Bring Across My Music (BAMM) is a project that brings you an exceptional playlist that relates most closely to what you feel. BAMM is different from other song recommendation tools, which generate their playlists from past choices. The user can simply narrate their feelings and, based on that, a playlist will pop up; alternatively, a playlist can be generated by analyzing the emotional context of a monologue. This will benefit online music streaming services, and radio and TV stations can use the service in their musical programs to play the best song for the context.

Aims:

— Identify ways of processing narrated human feelings to generate a relatable music playlist.

— Identify classifications that can be introduced to the music to optimize the task of finding relatable music.

— Identify a method of analyzing the human reaction/mood change.

— Determine the emotional state of a person from a verbal narration.

— Create the optimum user interface that can retrieve human feelings and deliver the playlist.

A song dataset will be created including lyrics, genre, wave files, and metadata, and classified using various techniques including collaborative filtering, hidden Markov models, n-grams, artificial grammar learning, finite state machines, and finite state transducers to find the optimum model. A dataset of narrations will be prepared based on text and wave patterns and analyzed to identify the emotional score of each narration. Based on the context of the narration and the calculated score, a playlist will be generated using the best classification model. A mobile application will be created to evaluate the system, and the application design will also be evaluated against HCI principles by domain experts and through user feedback.
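A minimal sketch of the narration-to-playlist idea, using a tiny hand-made emotion lexicon and fixed thresholds as stand-ins for the learned models and classifications described above:

```python
# Sketch of scoring a narrated feeling with a tiny emotion lexicon and mapping the
# score to a playlist category. The lexicon, categories, and thresholds are hypothetical;
# the proposed system would learn these from the narration and song datasets.
LEXICON = {"happy": 2, "excited": 2, "calm": 1, "tired": -1, "sad": -2, "lonely": -2}

def emotion_score(narration):
    words = narration.lower().split()
    return sum(LEXICON.get(w.strip(".,!?"), 0) for w in words)

def choose_playlist(score):
    if score >= 2:
        return "energetic"
    if score >= 0:
        return "easy-listening"
    return "comforting"

narration = "I feel a bit tired and lonely after a long week."
score = emotion_score(narration)
print(score, "->", choose_playlist(score))   # -3 -> comforting
```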

[1] Y. S. Chen, C. H. Cheng, D. R. Chen, and C. H. Lai, “A mood- and situation-based model for developing intuitive Pop music recommendation systems,” Expert Syst., vol. 33, no. 1, pp. 77–91, 2016, doi: 10.1111/exsy.12132.

[2] Y. Jin, N. Tintarev, and K. Verbert, “Effects of personal characteristics on music recommender systems with different levels of controllability,” RecSys 2018, 12th ACM Conf. Recomm. Syst., pp. 13–21, 2018, doi: 10.1145/3240323.3240358.

[3] D. Sánchez-Moreno, Y. Zheng, and M. N. Moreno-García, “Time-Aware Music Recommender Systems: Modeling the Evolution of Implicit User Preferences and User Listening Habits in A Collaborative Filtering Approach,” Appl. Sci., vol. 10, no. 15, p. 5324, 2020, doi: 10.3390/app10155324.

[4] A. Ogino and Y. Uenoyama, “Impression-Based Music Playlist Generation Method for Placing the Listener in a Positive Mood,” Int. J. Affect. Eng., 2020, doi: 10.5057/ijae.ijae-d-19-00016.

VIDEO PRESENTATION

BAMM: Bring Across My Music — an intelligent Music Playlist Generator Based on Narrated Human Feelings — Author Profiles

Towards a Style for Push-Communication Enabled Rich Web-based Applications

Nalaka R. Dissanayake — Sri Lanka Institute of Information Technology

Dharshana Kasthurirathna — Sri Lanka Institute of Information Technology

Shantha Jayalal — University of Kelaniya

Rich Web-based Applications — like Google apps or Facebook — provide rich graphical user interfaces that let users interact with these advanced web-based systems with a higher user experience compared to traditional web applications. Rich Web-based Applications allow their desktop-application-like rich graphical user interfaces to communicate faster with a server using a special communication model called Delta-Communication. There is a variety of technologies/techniques, such as AJAX, which can be used to implement Delta-Communication. Rich graphical user interfaces demand implementing many related features on a single web page using Delta-Communication, which increases the complexity of the system. Push-Communication is used to further improve the user experience through features like notifications and real-time updates. Push-Communication-enabled features further increase the complexity of Rich Web-based Applications, since the web is based on the request-response model and does not natively support push-communication. Therefore, additional elements are required to enable push-communication in the web environment, and these must be developed using dedicated technologies/techniques such as polling, Comet, or WebSocket. Delta-Communication techniques like Comet exploit the request-response model to simulate push-communication, whereas technologies like WebSocket implement true push-communication from the server to the client(s) with improved performance and scalability, in the direction of enriching the user experience. An architectural style can help reduce this complexity by assisting in realizing the formalism of push-enabled Rich Web-based Applications. A style can show the run-time configuration of the elements required to implement all aspects of these complex systems, including push-enabled features. Available architectural solutions are mostly based on specific technologies/techniques and realize only push-simulation; hence, they poorly address performance and scalability and are unable to improve the user experience. Our ongoing research proposes to understand the common characteristics and essential features of push-enabled Rich Web-based Applications, towards extending the RiWAArch style — which was introduced for general Rich Web-based Applications in the previous stage of this ongoing research — to realize how push-communication can be integrated. The resulting style is expected to be an abstract style that can realize the integration of true push-communication into Rich Web-based Applications in the form of Delta-Communication. The ultimate style should assist in developing elements that perform true push Delta-Communication using the WebSocket technology, improving the performance and scalability of the system in the direction of increasing the user experience.
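To illustrate the difference between simulated push (polling or Comet-style techniques) and true push delivery of delta updates, here is a standard-library asyncio sketch; it is a conceptual illustration only, not the RiWAArch style or a WebSocket implementation:

```python
# Conceptual sketch: true push vs. polling-based simulated push for delta updates.
# Standard library only; this is not a WebSocket or Comet implementation.
import asyncio

async def server(queue):
    # Server-side events become delta messages that are pushed to the client.
    for i in range(3):
        await asyncio.sleep(0.2)
        await queue.put({"delta": i})

async def push_client(queue):
    # True push: the client simply awaits deltas; no repeated requests are issued.
    for _ in range(3):
        print("received:", await queue.get())

async def polling_client(read_state, interval=0.25, polls=4):
    # Simulated push: the client keeps asking whether anything changed.
    for _ in range(polls):
        await asyncio.sleep(interval)
        print("polled state:", read_state())

async def main():
    queue = asyncio.Queue()
    state = {"delta": None}

    async def server_with_state():
        for i in range(3):
            await asyncio.sleep(0.2)
            state["delta"] = i               # polling client can only observe this copy

    await asyncio.gather(server(queue), push_client(queue))
    print("-- polling (simulated push) --")
    await asyncio.gather(server_with_state(), polling_client(lambda: dict(state)))

asyncio.run(main())
```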

VIDEO PRESENTATION

Towards a Style for Push-Communication Enabled Rich Web-based Applications — Author Profiles

Online Proctoring For Mass Examinations With Optimized Resource Usage

C. G. R. Fernando — University of Colombo School of Computing

S. N. E. Kodituwakku — University of Colombo School of Computing

K. M. D. M. K. Gamhatha — University of Colombo School of Computing

B. N. B. Perera — University of Colombo School of Computing

R. D. Perera — University of Colombo School of Computing

Dr. T. C. Halloluwa — University of Colombo School of Computing

Mr. M. A. P. P. Marasinghe — University of Colombo School of Computing

Sri Lanka's education system is currently moving to digital modes; hence, the demand for online examinations is surging. Conducting online examinations with video conferencing tools and/or web browsers has several critical issues, the main one being bandwidth: most students and proctors do not have sufficient bandwidth to send and receive two to three streams when using video conferencing tools. Most of the tailor-made tools used for exam proctoring and monitoring are expensive and come with many limitations, and students with low-specification devices face many issues when running two or three applications concurrently. Hosting mass examinations online is another critical issue in today's online examination context: a considerable number of proctors is needed to monitor students in this mode, and for institutions where the number of proctors is limited, monitoring multiple students simultaneously becomes a major challenge. Besides all these issues, preventing exam offences is another critical problem faced by proctors and exam coordinators when hosting examinations online. There are many ways a student can cheat during an exam, such as referring to e-learning material, getting external help, using a virtual machine, or using a secondary screen. The proposed solution is a single application that improves the quality of distance examinations by reducing the aforementioned bandwidth issues and introducing features that help conduct mass exams reliably and efficiently. The examinee's front view, side view, screen, current process list of the device, device specification, device clipboard, and background sound will be used to analyze the examinee's behavior. For the front view, the examinee's front camera will be used; the side view will be taken from another mobile device placed at an appropriate angle. The real-time process list, clipboard, and device specification will be collected by a desktop application. Even though the system collects all of this data, only the side-camera footage will be streamed to a proctor. The front-camera feed will be processed through a machine learning model to determine whether there is any other person near the examinee, and an audio feed will be fed into another ML model that detects suspicious sounds. If the models detect any unnatural behavior or exam offence, the system will send an alert to the proctor with snapshots of that behavior. All audio and video streams will be encrypted, hashed, and stored on the student's computer as evidence, while only the side-camera footage is streamed; therefore, the proposed solution reduces the bandwidth issue drastically. Students and proctors can communicate via a built-in chat interface if needed. Since the proctor only needs to monitor one video stream per student, mass examinations with online proctoring can be conducted with minimum resources using the proposed solution.
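A small sketch of the local evidence-integrity and alerting step described above: hash the locally stored recording so it can be verified later, and package an alert for the proctor. The file paths, alert fields, and transport are hypothetical:

```python
# Sketch of the evidence-integrity step: hash a locally stored recording so it can
# later be verified, and package an alert for the proctor. File names and the alert
# format are hypothetical placeholders.
import hashlib
import json
import time

def sha256_of_file(path, chunk_size=1 << 20):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_alert(student_id, reason, snapshot_path, evidence_path):
    return json.dumps({
        "student_id": student_id,
        "reason": reason,                     # e.g. "second person detected"
        "snapshot": snapshot_path,
        "evidence_sha256": sha256_of_file(evidence_path),
        "timestamp": time.time(),
    })

# Example usage (paths and transport are placeholders):
# alert = build_alert("S1234", "second person detected",
#                     "snapshots/s1234_0142.jpg", "evidence/s1234_front.webm")
# send_to_proctor(alert)
```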

VIDEO PRESENTATION

Online Proctoring For Mass Examinations With Optimized Resource Usage — Author Profiles

Reduce patient mortality: Applicability of Cognitive and Usability engineering Methods for Patient Safety in Health Information Systems

Jagath Wickramarathne — Sri Lanka Institute of Information Technology

Dr. Nuwan Kodagoda — Sri Lanka Institute of Information Technology

Dr. Shyam Reyal — Sri Lanka Institute of Information Technology

Manual (non-computerized) paper-based medical record systems are still used in the health sector. They are less productive, and their accuracy depends on the operator; they also cause accessibility, data security, durability, and data fragmentation issues. As a solution, electronic health information systems (HIS) were introduced. An HIS collects data from the health sector and other relevant sectors, analyses the data and ensures their overall quality, relevance, and timeliness, and converts the data into information for health-related decision-making [1]. A new type of error in healthcare arising from the use of technology has been identified: technology-induced errors, defined as errors “that inadvertently occur as a result of using a technology” [3], which arise from “the design and development of technology, the implementation and customization of technology, the interactions between the operation of technology and the new work processes that arise from a technology's use” [4]. Technology-induced errors are a type of unintended consequence that may lead to patient harm, disability, or death. Because of their potential to harm, they must be reduced [5] in order to implement patient safety, defined as the reduction of the risk of unnecessary harm associated with healthcare to an acceptable minimum [2]. According to the literature, different approaches have been used to identify and minimize technology-induced errors, namely software engineering approaches, organizational behaviour approaches, human factors approaches, and multi-theory models and frameworks [6]. Researchers have found that 80% of usability problems are associated with technology-induced errors [7] and have worked to integrate cognitive science and usability engineering methods into the evaluation of clinical information systems [8], but these methods have not been fully integrated into the approaches above. Frameworks under the software engineering approaches have been identified in previous research [6]. This research aims to explore those existing frameworks and investigate how to integrate cognitive science and usability engineering methods to identify and minimize technology-induced errors. Cognitive science is the interdisciplinary study of mind and intelligence, spanning philosophy, psychology, artificial intelligence, neuroscience, linguistics, and anthropology. Cognitive task analysis (CTA) is a widely used task analysis technique in cognitive science, often used to describe the knowledge and strategies required for performance. A variety of CTA techniques, such as Applied Cognitive Task Analysis (ACTA), the Critical Decision Method (CDM), the Skill-Based CTA Framework, Task-Knowledge Structures (TKS), and the Cognitive Function Model (CFM), are used to elicit knowledge from experts and other sources [9]. This research aims to investigate and select the most suitable cognitive model for eliciting knowledge from experts in HIS design. Usability can be broadly defined as the capacity of a system to allow users to carry out their tasks safely, effectively, efficiently, and enjoyably [8]. Popular usability techniques include Heuristic Evaluation (HE), Cognitive Walkthrough (CW), Action Analysis (AA), Thinking Aloud (THA), Field Observation (FO), and Questionnaires (Q) [10][11]; the most suitable usability engineering method(s) will be selected for HIS design and testing. Once this selection has been completed, we will investigate how to integrate cognitive science and usability engineering methods into the existing framework(s).
A qualitative research approach will be used to select the cognitive model and usability methods, and the proposed new framework will be tested quantitatively.

[1] F. Kitsios, M. Kamariotou, V. Manthou, and A. Batsara, “Hospital Information Systems: Measuring End-User Satisfaction,” Lect. Notes Bus. Inf. Process., vol. 402, no. June, pp. 463–479, 2020, doi: 10.1007/978–3–030–63396–7_31.

[2] WHO, “Conceptual Framework for the International Classification for Patient Safety,” 2009. [Online]. Available: https://www.who.int/patientsafety/taxonomy/icps_full_report.pdf.

[3] P. Bellwood, “Qualitative Study of Technology-Induced Errors in Healthcare Organizations,” University of Victoria, 2013.

[4] E. M. Borycki, “Technology-induced errors: where do they come from and what can we do about them?,” Stud. Health Technol. Inform., vol. 194, pp. 20–26, 2013, doi: 10.3233/978–1–61499–293–6–20.

[5] E. Borycki et al., “Methods for Addressing Technology-induced Errors: The Current State,” Yearb. Med. Inform., no. 1, pp. 30–40, 2016, doi: 10.15265/iy-2016–029.

[6] M. M. Yusof, A. Papazafeiropoulou, R. J. Paul, and L. K. Stergioulas, “Investigating evaluation frameworks for health information systems,” Int. J. Med. Inform., vol. 77, no. 6, pp. 377–385, 2008, doi: 10.1016/j.ijmedinf.2007.08.004.

[7] A. W. Kushniruk, M. M. Triola, E. M. Borycki, B. Stein, and J. L. Kannry, “Technology induced error and usability: the relationship between usability problems and prescription errors when using a handheld application.,” Int. J. Med. Inform., vol. 74, no. 7–8, pp. 519–526, Aug. 2005, doi: 10.1016/j.ijmedinf.2005.01.003.

[8] A. W. Kushniruk and V. L. Patel, “Cognitive and usability engineering methods for the evaluation of clinical information systems,” vol. 37, pp. 56–76, 2004, doi: 10.1016/j.jbi.2004.01.003.

[9] R. Clark, D. Feldon, J. J. G. Van Merrienboer, K. Yates, and S. Early, “Cognitive Task Analysis,” Handb. Res. Educ. Commun. Technol., no. February 2016, pp. 577–593, 2016, [Online]. Available: https://www.researchgate.net/publication/294699964_Cognitive_task_analysis.

[10] C. Paton, A. W. Kushniruk, E. M. Borycki, M. English, and J. Warren, “Improving the Usability and Safety of Digital Health Systems: The Role of Predictive Human-Computer Interaction Modeling,” J. Med. Internet Res., vol. 23, no. 5, p. e25281, 2021, doi: 10.2196/25281.

[11] J. Nielsen, “Usability inspection methods,” Conf. Hum. Factors Comput. Syst. — Proc., vol. 1994-April, pp. 413–414, 1994, doi: 10.1145/259963.260531.

VIDEO PRESENTATION

Reduce patient mortality: Applicability of Cognitive and Usability engineering methods for Patient Safety in Health Information Systems — Author Profiles

Effective identification of Nitrogen Fertilizer demand for Paddy Cultivation using UAVs

Rusiri Illesinghe — University of Colombo School of Computing

Shayan Malinda — University of Colombo School of Computing

Anupama Karunarathna — University of Colombo School of Computing

Heshan Kavinda — University of Colombo School of Computing

Dr. Kasun Karunanayake — University of Colombo School of Computing

Dr. Thilina Halloluwa — University of Colombo School of Computing

Mr. Upul Rathnayake — University of Colombo School of Computing

The aim of this work is to identify the malnourished areas of soil in a paddy field, with low or excessive nitrogen levels, using UAV footage and machine learning approaches. The purpose is to make it easier for farmers in Sri Lanka to provide a sufficient level of nitrogen fertilizer to the areas of the paddy field that require it and thereby increase their yields. Applying inorganic fertilizers to increase yields is a common agricultural practice in paddy production, and nitrogen fertilizers are among the most commonly used varieties. The rate of nitrogen consumption plays an important role in the growth of paddy plants, with negative consequences under both deficient and excessive conditions.

Nitrogen deficiency causes decreased chlorophyll content and photosynthetic rate and increased leaf reflectance, and at the most severe stages it may turn the entire plant yellowish, directly affecting the food production process. Excessive nitrogen causes “luxuriant” growth, making the plant attractive to insects, diseases, and pathogens. Additionally, it causes excessive growth and weakens the stems of the plant, lowering farm profits and bringing negative consequences for the environment such as water and soil acidification, pollution of groundwater, surface water, and other water and mineral resources, and accelerated ozone depletion. Therefore, the effective use of nitrogen fertilizer for paddy cultivation is essential. According to the Rice Research and Development Institute — Bathalagoda, 20 kg/ha (28%) of nitrogen can be saved by applying fertilizer with reference to the LCC (Leaf Color Chart).

The main problem identified is that, although the Sri Lankan Agriculture Department suggests the LCC as a way to determine the nitrogen demand of paddy crops, the practical application of the LCC still has several limitations, including incorrect visual reading of colors with the naked eye, incorrect identification using faded color charts, the inconvenience of taking readings from hard-to-reach and middle areas of paddy fields, and, for larger paddy fields, an inspection process that may take several man-hours. Due to these barriers, farmers tend to apply an average amount of nitrogen fertilizer to the paddy field by looking at a particular area, or based entirely on their previous experience, which often results in improper provision of nitrogen.

This ongoing work is intended to provide a nitrogen level identification mechanism for paddy fields using a machine learning approach along with image processing techniques, where the images are captured by a UAV flying above the paddy field. This is a time-saving and safe approach, as the nitrogen requirement of a wide area can be detected accurately at a single point in time, conveniently capturing images of unreachable areas of the paddy field without damaging the plants.

The proposed methodology analyzes the colors of the paddy plant areas; the main focus is to find a better approach to optimizing the captured images and to explore a more effective machine learning approach for predicting the nitrogen level after the feature extraction process.
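As a sketch of one simple colour feature that such an image-processing pipeline could start from, the snippet below computes the Excess Green index (ExG = 2g - r - b over normalised channels) per grid cell of a synthetic image and flags cells whose greenness falls below an arbitrary threshold; the proposed work would instead learn the mapping from image features to nitrogen level:

```python
# Sketch of a simple colour feature for UAV imagery: the Excess Green index averaged
# over grid cells, flagging cells with low greenness. The image and threshold are
# synthetic placeholders, not calibrated against the Leaf Color Chart.
import numpy as np

def excess_green(rgb):
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    total = r + g + b + 1e-9
    return 2 * (g / total) - (r / total) - (b / total)

def flag_low_greenness(rgb, grid=4, threshold=1.0):
    exg = excess_green(rgb.astype(float))
    h, w = exg.shape
    flags = np.zeros((grid, grid), dtype=bool)
    for i in range(grid):
        for j in range(grid):
            cell = exg[i * h // grid:(i + 1) * h // grid, j * w // grid:(j + 1) * w // grid]
            flags[i, j] = cell.mean() < threshold     # low greenness -> candidate area
    return flags

# Synthetic 64x64 "field" image: mostly green with one paler quadrant.
img = np.zeros((64, 64, 3))
img[..., 1] = 0.6                     # green everywhere
img[:32, :32, 0] = 0.5                # paler (more red) top-left quadrant
print(flag_low_greenness(img))        # top-left cells are flagged
```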

VIDEO PRESENTATION

Effective identification of Nitrogen Fertilizer demand for Paddy Cultivation using UAVs — Author Profiles

Fuzzy Logic Controller Based Automated Drip Irrigation System Using Field Capacity Measurements

Senthuja Karunanithy — Sri Lanka Institute of Information Technology

S.M.B. Harshanth — Sri Lanka Institute of Information Technology

At present, and even more so in the future, irrigated agriculture will take place under conditions of groundwater scarcity; difficulty in providing an adequate water supply for irrigation will be the norm. Irrigation management will shift from emphasizing production per unit of land towards maximizing production per unit of water consumed, the “water productivity”. Hence, a test was conducted to find an optimum point of irrigation considering the quantity of water applied and the growth and yield of a plant. The proposed smart irrigation system optimizes water usage for agriculture and improves the utilization of agricultural water resources; automatic, location-aware, and appropriately timed drip irrigation is a good choice for this. In this study, an automatic drip irrigation control system based on a wireless sensor network and fuzzy control is introduced. The system uses soil moisture, temperature, humidity, light, pH value, and wind information and sends drip irrigation instructions via a wireless network. It feeds the above six factors into the fuzzy controller, creates a fuzzy control rule base, and determines the crop irrigation time through fuzzy control. The humidity sensor data together with a CNN model help predict rain so that rainwater can be harvested for agricultural use. Weeds are plants that grow in the wrong place on agricultural land; the system therefore also focuses on detecting weeds in the crop using convolutional neural networks and image processing, and then notifies the user. The physical system is installed on the agricultural land, and the productivity of the particular plant is compared with past data.

Keywords — Drip Irrigation, Fuzzy logic, Arduino and Field capacity
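As an illustrative sketch only (using the scikit-fuzzy library and just two of the six inputs, with made-up membership functions and rules rather than the authors' rule base), a Mamdani-style controller mapping sensor readings to an irrigation time might look like this:

```python
# Toy fuzzy irrigation controller: soil moisture and temperature -> irrigation minutes.
# Membership functions and rules are illustrative assumptions only.
import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl

moisture = ctrl.Antecedent(np.arange(0, 101, 1), "moisture")        # % of field capacity
temperature = ctrl.Antecedent(np.arange(0, 51, 1), "temperature")   # degrees Celsius
irrigation = ctrl.Consequent(np.arange(0, 61, 1), "irrigation")     # minutes of dripping

moisture.automf(3)        # auto-generates 'poor', 'average', 'good' fuzzy sets
temperature.automf(3)     # here 'good' simply means a high value on the universe
irrigation["short"] = fuzz.trimf(irrigation.universe, [0, 0, 20])
irrigation["medium"] = fuzz.trimf(irrigation.universe, [10, 30, 50])
irrigation["long"] = fuzz.trimf(irrigation.universe, [40, 60, 60])

rules = [
    ctrl.Rule(moisture["poor"] & temperature["good"], irrigation["long"]),
    ctrl.Rule(moisture["average"], irrigation["medium"]),
    ctrl.Rule(moisture["good"], irrigation["short"]),
]

sim = ctrl.ControlSystemSimulation(ctrl.ControlSystem(rules))
sim.input["moisture"] = 35        # readings from the wireless sensor nodes
sim.input["temperature"] = 32
sim.compute()
print(f"Irrigate for about {sim.output['irrigation']:.0f} minutes")
```

The full system described above would extend the rule base to humidity, light, pH, and wind, and push the computed irrigation time to the drip valves over the wireless network.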

VIDEO PRESENTATION

Fuzzy Logic Controller Based Automated Drip Irrigation System Using Field Capacity Measurements — Author Profiles

Self-Navigating Smart Wheelchair

CSW Rajapaksha — General Sir John Kotelawala Defence University

WAAT Perera — General Sir John Kotelawala Defence University

PADA Seneviratne — General Sir John Kotelawala Defence University

KATD Rajapaksha — General Sir John Kotelawala Defence University

SHCKD Silva — General Sir John Kotelawala Defence University

VR Weerasekara — General Sir John Kotelawala Defence University

WG Kalupahana — General Sir John Kotelawala Defence University

Dr. Pradeep Kalansooriya — General Sir John Kotelawala Defence University

In today’s world, many people try to help disabled people and make them feel that there is no difference between them and anyone else. Every year, new ideas and innovations emerge that help them live a better life. The wheelchair is one of the main pieces of equipment disabled people use. Developing a wheelchair with capabilities such as environment mapping and self-navigation helps a disabled person travel between pre-mapped locations more easily. We plan to develop a system that can map the environment and detect obstacles using computer vision, and use that data to calculate the optimal path between two locations. With a system of motorized wheels, the wheelchair will be completely independent and will not rely on somebody else to push it. This paper discusses the design of a wheelchair capable of understanding its surroundings and acting accordingly to transport the user.

Keywords — Self-navigation, Environment mapping, Obstacle avoidance
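To show the kind of path computation involved (a self-contained sketch on a toy occupancy grid; the actual wheelchair would plan over the map built from its vision system), a standard A* search can return the shortest obstacle-free route between two pre-mapped locations:

```python
# A* shortest path on a 2D occupancy grid (0 = free cell, 1 = obstacle).
# The grid, start and goal below are toy values standing in for the mapped environment.
import heapq

def astar(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    def h(a, b):                                   # Manhattan-distance heuristic
        return abs(a[0] - b[0]) + abs(a[1] - b[1])
    frontier = [(h(start, goal), 0, start, [start])]
    visited = set()
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                step = (nr, nc)
                heapq.heappush(frontier, (cost + 1 + h(step, goal), cost + 1, step, path + [step]))
    return None                                    # no obstacle-free route exists

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))   # routes around the blocked middle row
```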

VIDEO PRESENTATION

Self-Navigating Smart Wheelchair — Author Profiles

Smart Wallet Tracking System

TLN Nadeeshana — General Sir John Kotelawala Defence University

HWKN De Silva — General Sir John Kotelawala Defence University

JD Motha — General Sir John Kotelawala Defence University

AHBKY Gawesha — General Sir John Kotelawala Defence University

CS Barro — General Sir John Kotelawala Defence University

MTD Mackonal — General Sir John Kotelawala Defence University

Dr. Pradeep Kalansooriya — General Sir John Kotelawala Defence University

A wallet is an essential item for everyone, as it holds very valuable belongings. According to several surveys, the wallet is among the top five items most often lost, misplaced, or even stolen. Because of this problem, there is a need to find a way to track a misplaced wallet. To implement this smart wallet, a GPS module will be installed in the wallet to provide real-time location updates at all times. In this context, we plan to develop a mobile application through which the wallet can be viewed and tracked. The proposed wallet enables real-time tracking and monitoring of its location within a specific range. As future development, further optimization can be done to make the wallet more portable and handy.

Keywords — GPS, Android, GUI, Arduino, Internet of Things

VIDEO PRESENTATION

Smart Wallet Tracking System — Author Profiles

Emotional Amplification in Affective Gaming through Music Emotion Recognition

Dasuni Ganepola — General Sir John Kotelawala Defence University

Dr. Pradeep Kalansooriya — General Sir John Kotelawala Defence University

Does music have the potential for emotional amplification?

Is music all about provoking emotion? Well, to a certain level, we can accept this fact. Psychologists have shown that music does affect the brain regions associated with emotions.
So how will this psychological activity benefit researchers like us in the HCI domain?
Well, since music has the potential for emotion stimulation and amplification, it can be considered a technology that creates affect in the research area of Affective Computing. Affect is all about your present emotional state. Affective Computing is a recent research area that has emerged with Emotion Artificial Intelligence and other affective technologies, and it will create a new way of living with our digital devices (even the ordinary ones!).
A branch of Affective Computing is Affective Gaming. It is a recent digital game technology in which the player’s real-time emotional state exerts a direct influence on the gameplay to enhance user interactivity. The concept of Affective Gaming is implemented via an affective loop: the gaming system detects the real-time emotional state of a player via their biological signals, and the system responds in a manner that increases the emotional interactivity of the player. The system response can take the form of either altering the game plot/objectives or emotional amplification. The latter approach is yet to be deeply explored and exploited in the Affective Gaming research community.
My research partners and I are working on a project to increase user interactivity in games through emotional amplification via game music. It has been shown scientifically that game music acts as an emotional amplifier in terms of cortical arousal. Hence, we hypothesize that game players’ levels of cortical arousal can be amplified by game music through an affective feedback loop, increasing the user interactivity of the game.
To identify and map the relationship between game music and the cortical arousal levels it stimulates in players, a technology called Music Emotion Recognition (MER) is used in this project. MER automates the process of human emotion perception that happens at the auditory cortex of the human brain, and it is implemented through Machine Learning (ML) and Deep Learning (DL).
Initial work in this project applied MER to the game music genres Rock and Electronic: an ML model was trained to identify relationships and classify music by cortical arousal level in the form of emotion labels. The classification achieved an average precision score of 70.82%.
Our publication is now available on IEEE Xplore. You can check it out for more details on our project: https://ieeexplore.ieee.org/abstract/document/9313028
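As a rough sketch of how such an MER classifier can be assembled (an illustration with assumed file names and labels, not the published pipeline, which is described in the IEEE paper above), clip-level audio features are extracted and fed to a standard classifier:

```python
# Sketch: extract simple audio features from game-music clips with librosa and
# train an arousal classifier. File names and labels are placeholders.
import librosa
import numpy as np
from sklearn.svm import SVC

def clip_features(path):
    y, sr = librosa.load(path, duration=30)                   # first 30 s of the clip
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)        # timbre
    rms = librosa.feature.rms(y=y)                            # loudness / energy
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)  # brightness
    return np.hstack([mfcc.mean(axis=1), rms.mean(), centroid.mean()])

# Hypothetical labelled clips: 1 = high cortical arousal, 0 = low cortical arousal
clips = ["rock_01.wav", "electronic_01.wav"]
labels = [1, 0]

X = np.vstack([clip_features(c) for c in clips])
model = SVC(kernel="rbf").fit(X, labels)

# Predicted arousal label for a new game-music track
print(model.predict([clip_features("new_track.wav")]))
```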

VIDEO PRESENTATION

Emotional Amplification in Affective Gaming through Music Emotion Recognition — Author Profiles

A Framework for Online Exam Proctoring in Resource-Constrained Settings Focusing on Preserving Academic Integrity

Dilky Felsinger — University of Colombo School of Computing

Dr. Thilina Halloluwa — University of Colombo School of Computing

Ishani Fonseka — University of Colombo School of Computing

Global adoption of Online Learning has increased rapidly in recent years due to technological enhancements. Online Learning has helped many institutions to provide uninterrupted education during unprecedented circumstances such as global health emergencies. Access to quality learning material from world-renowned universities, the ability to learn at one’s own pace, and even the ability to earn college credits by engaging in Massive Open Online Courses (MOOC) are some of the benefits of Online Learning. Honesty and integrity are the foundations of all academic programs. However, although Online Learning has seen a sudden surge in usage during the Coronavirus pandemic, it faces issues in maintaining academic integrity during online evaluations.

To this end, many universities have adopted remote exam proctoring through video-conferencing software, which requires a substantial number of human proctors. In some distance learning programs, where thousands of students take an online test simultaneously, manual proctoring using video-conferencing software has become a tedious task. Many automated online exam proctoring systems designed to overcome the capacity constraints of manual proctoring are available in the market. However, government-funded institutes in Sri Lanka cannot utilise them because they are costly and often do not suit low-resource environments.

Therefore, the purpose of this study is to explore how to conduct online exam proctoring effectively under resource constraints. The study aims to apply machine learning techniques to identify ways to minimise computational costs and network data consumption in automated online exam proctoring scenarios. Since we do not have a readily accessible dataset capturing possible acts of misconduct, we will create a dataset by manually proctoring an online exam over video conferencing. We will use the created data to identify an online proctoring mechanism that performs well under resource constraints.

Finally, the study will evaluate the effectiveness of the proposed method in detecting academic misconduct in low-resource environments.
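To give a feel for the low-resource direction the study describes (an assumed example, not the framework's final method), a very cheap client-side check can flag only suspicious frames so that full video never needs to be streamed:

```python
# Lightweight proctoring check: flag frames where zero or several faces are visible,
# so only flagged frames need to be uploaded for human review. Illustrative only.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def flag_frame(frame):
    """Return a reason string if the frame looks suspicious, else None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return "candidate absent"
    if len(faces) > 1:
        return "multiple people in frame"
    return None

cap = cv2.VideoCapture(0)            # webcam on the candidate's machine
ok, frame = cap.read()
if ok:
    print(flag_frame(frame) or "frame looks normal")
cap.release()
```

Classical detectors like this run comfortably on low-end hardware, which is the kind of trade-off between accuracy, computation, and network usage the study sets out to evaluate.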

VIDEO PRESENTATION

A Framework for Online Exam Proctoring in Resource-Constrained Settings Focusing on Preserving Academic Integrity — Author Profiles

Employment Recommendation and Resume Shortlisting Portal Based on Machine Learning for the Information Technology Industry in Sri Lanka

Thathsarana Weerakoon — University of Colombo School of Computing

Isurika Perera — University of Colombo School of Computing

Pramodya Pathirana — University of Colombo School of Computing

Yeshan Gunawardana — University of Colombo School of Computing

Udara Weerasinghe — University of Colombo School of Computing

Dr. T. C. Halloluwa — University of Colombo School of Computing

Mr. M. A. P. P. Marasinghe — University of Colombo School of Computing

The IT industry has grown into one of the most prominent industries in Sri Lanka. Many graduates entering the IT industry find it difficult to locate employment opportunities that match their portfolio well when considering factors such as field of study, experience, projects, technology stack, skills, and job preference. Because of the prevailing diversity, many job seekers are not aware of the companies in the industry and the available opportunities. Thus, many job seekers resort to trial and error, eventually ending up in workplaces they are not content with. Furthermore, companies receive many resumes for a particular job opening, which makes the resume profiling and shortlisting process time-consuming. Because candidates have no way of tracking the status of their applications, many keep applying for other jobs, as they cannot wait too long without employment. This creates the possibility that a candidate is selected for one of those other jobs while still waiting for a response from the company that perfectly matches their profile. Coupled with the diversity of the industry, many candidates do not end up in a job that realizes their full potential.

The proposed system addresses the aforementioned problems by implementing a job recommendation and resume shortlisting portal using machine learning algorithms. It features a recommendation engine whereby job seekers are recommended job opportunities that match their portfolio, while employers are recommended job seekers that match their requirements. Moreover, employers will be able to make use of the resume shortlisting functionality, whereby they can shortlist the received resumes to a preferred number based on a preferred set of keywords. In addition, the system would provide the essential features of an online job portal, LinkedIn integration, analytics, and notifications. The proposed system would help job seekers find a matching job and assist employers in their recruitment process, eventually making the job application and profiling process more efficient and effective.
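One simple way to realize keyword-driven shortlisting (an illustrative sketch with made-up texts; the portal's actual matching model is not specified in the abstract) is to rank resumes against a job description by TF-IDF cosine similarity:

```python
# Rank candidate resumes against a job description with TF-IDF + cosine similarity.
# The job description and resume snippets below are hypothetical examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

job_description = "Backend engineer with Python, Django and AWS experience"
resumes = [
    "Three years building Django REST services on AWS using Python",
    "Graphic designer skilled in Photoshop and branding",
]

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform([job_description] + resumes)

# Similarity of each resume to the job description, then shortlist the top-k
scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()
for i in scores.argsort()[::-1][:1]:                 # k = 1 for this toy example
    print(f"Shortlisted resume {i}: similarity {scores[i]:.2f}")
```

The same similarity scores could feed the recommendation side, matching job seekers to openings and vice versa.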

VIDEO PRESENTATION

Employment Recommendation and Resume Shortlisting Portal Based on Machine Learning for the Information Technology Industry in Sri Lanka — Author Profiles

Emotion Based Conditional Gaming Interface Design Using Kansei Engineering

CS Wanigasooriya — General Sir John Kotelawala Defence University

AMRNVB Pethiyagoda — General Sir John Kotelawala Defence University

NS Madushanka — General Sir John Kotelawala Defence University

MFAR Fernando — General Sir John Kotelawala Defence University

HMVS Herath — General Sir John Kotelawala Defence University

RMNM Rajapaksha — General Sir John Kotelawala Defence University

WDDA Gunawardhana — General Sir John Kotelawala Defence University

Dr. L.P. Kalansooriya — General Sir John Kotelawala Defence University

Computer games are a popular source of entertainment. The study of a player’s brain activity while playing a game is both an experimental contribution to central nervous system neurophysiology and a support for marketing research. Electromagnetic waves emitted by the brain can be detected by devices; for example, psychologists can utilize EEG (Electroencephalography) to evaluate the influence of games on players. Moreover, to introduce an innovative medium for players to interact with the gaming system and to intensify the gamer’s experience, analysis of EEG signals can be used to alter in-game variables that affect the overall gameplay on the go. Minuscule variables such as skybox alterations, the tempo of sound effects, and ambient lighting, as well as prime variables in a game such as the storyline, weather conditions, and the overall look and feel, can be adjusted according to the emotions of the player detected from an EEG headset. Your brain cells use electrical signals to interact with one another: neurons communicate by sending out small electrical signals, which resemble waves that go up and down in strength; these are your brain waves. EEG is a technique for measuring brain waves in which electrodes, or miniature detectors, are placed on a person’s scalp using a cap or a headset. Normally, a cap holds all of these electrodes; however, portable devices with fewer electrodes, in sleeker-looking headsets, have lately been introduced. Ultimately, the mechanisms mentioned above would result in an emotion-based conditional gaming experience. The capacity to investigate how the brain works in real contexts, such as games, is one of the many benefits. This paper focuses mainly on how human emotions can be used to shape the gaming experience by tweaking the overall game interface itself, using Kansei Engineering principles.
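A minimal sketch of the signal-processing step behind such a system (assumed sampling rate and synthetic data; not the project's implementation) estimates arousal from one EEG channel as the beta/alpha band-power ratio, which the game loop could then use to adjust interface variables:

```python
# Estimate an arousal index from a single EEG channel via Welch band powers.
# The sampling rate and the random signal below are placeholders for headset data.
import numpy as np
from scipy.signal import welch

fs = 256                                   # assumed headset sampling rate (Hz)
eeg = np.random.randn(10 * fs)             # stand-in for 10 s of one EEG channel

freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)

def band_power(low, high):
    mask = (freqs >= low) & (freqs < high)
    return np.trapz(psd[mask], freqs[mask])

alpha = band_power(8, 13)                  # associated with a relaxed state
beta = band_power(13, 30)                  # associated with alertness / arousal
arousal_index = beta / alpha

# A game loop could poll this index and, for instance, soften the ambient
# lighting or slow the soundtrack tempo when arousal climbs past a threshold.
print(f"beta/alpha arousal index: {arousal_index:.2f}")
```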

VIDEO PRESENTATION

Emotion Based Conditional Gaming Interface Design Using Kansei Engineering — Author Profiles

Acknowledgments

Our sincere thanks to the HCI interest volunteer teams behind this program. We would especially like to acknowledge:

Yashithi Dharmawimala, University of Colombo School of Computing

Ishara Wijekoon, University of Colombo School of Computing

Thisari Gunawardena, University of Colombo School of Computing

Manushi Jayawardena, University of Colombo School of Computing

Sandul Renuja, University of Colombo School of Computing
