MLab students have the opportunity to work on a variety of activities, including traditional research (conference papers), coding projects for contests, goodwill projects, grants, and software products. Our past work (since September 2019) has been the result of different undergraduate teams, each of which has accomplished amazing and ambitious goals! We are always looking for new ideas, projects, and research. If you have an idea, or just want to learn how to work with Python and artificial intelligence, contact us!
Grants
’19 – Tech Fee Grant
’21 – Tech Fee Grant
’22 – Collaborative Collision Winner ($25,000)
’22 – Tech Fee Grant
’23 – Tech Fee application (pending)
Research in Development
The following research activities are currently in progress.
Learning with an AI companion
A usability study that seeks to understand how people interact with artificial intelligence when given a series of questions and a prompt playbook to guide their work.
OSRAI (Ocean Search and Rescue AI)
This project is an extension of Shark Finder that adds 10+ new classes, including people alone in the water, people paired with a variety of flotation devices, and small craft. The training data will be generated, and the system will be tested using a fixed-wing aircraft equipped with a camera and an onboard computer to manage the avionics.
Performance comparison study: synthetic media versus authentic training images
In this study, YOLO performance data drawn from a paper authored by —– will be replicated, this time using synthetic media to train YOLO, in order to compare and contrast the development and performance issues that arise with synthetic training media. The original study used 100% authentic imagery of people in water, with an aerial platform carrying a camera and broadcasting video back to a laptop running the YOLO computer-vision model. The goal of this study is to use synthetic media from the MLab pipeline to evaluate the efficiency of synthetic media.
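For illustration, here is a minimal sketch of what such a fine-tuning and evaluation run might look like, assuming the Ultralytics YOLO package; the dataset config names are hypothetical placeholders, not the study's actual files:

```python
# A minimal sketch, assuming the Ultralytics package and hypothetical
# dataset configs pointing at synthetic and authentic imagery.
from ultralytics import YOLO

# Start from pretrained weights, then fine-tune on synthetic media.
model = YOLO("yolov8n.pt")
model.train(data="synthetic_water.yaml", epochs=100, imgsz=640)

# Evaluate on held-out authentic imagery to compare against the
# original study's numbers.
metrics = model.val(data="authentic_water.yaml")
print(metrics.box.map50)  # mAP@0.5, a common YOLO benchmark
```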
Performance comparison study: Vision Transformers versus DNN computer vision
The MLab will run basic tests using the same dataset to train a ViT (Vision Transformer) and a DNN (YOLOv8), to learn what advantages and disadvantages each has and to assess the quality of the work-arounds needed to satisfy benchmarks.
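A minimal sketch of how the two model families might be put side by side, assuming torchvision's pretrained ViT and Ultralytics' YOLOv8 classification variant; the model choices and dummy input are illustrative, not the MLab's actual configuration:

```python
# Load one model from each family and time a forward pass on the
# same stand-in input. Real tests would share a benchmark dataset.
import time
import torch
from torchvision.models import vit_b_16, ViT_B_16_Weights
from ultralytics import YOLO

vit = vit_b_16(weights=ViT_B_16_Weights.DEFAULT).eval()
yolo = YOLO("yolov8n-cls.pt")  # YOLOv8 classification variant

x = torch.rand(1, 3, 224, 224)  # stand-in for a shared test image

with torch.no_grad():
    start = time.perf_counter()
    vit_logits = vit(x)
    print(f"ViT inference: {time.perf_counter() - start:.3f}s")

start = time.perf_counter()
yolo_result = yolo(x)
print(f"YOLOv8 inference: {time.perf_counter() - start:.3f}s")
```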
How universities commercialize research
This research team is examining how universities commercialize research, using Shark Finder, a project developed in the MLab, as a case study. The work includes interviews and looks at how projects are marketed and commercialized.
Published Peer-reviewed Conference Papers
Padilla-Rodríguez, B. C., & Adams, J. (2023). Acceptance of Online Degrees by Undergraduate Mexican Students: Comparing perceptions a decade later. Presented June 12 at the Association for the Advancement of Computing in Education (AACE) conference, Vienna, Austria.
Adams, J. L., Ravuri, B., Moja, O., Obermaier, L., & Roberts, A. (2023). Uses of Artificial Intelligence in Higher Education. Presented June 12 at the Association for the Advancement of Computing in Education (AACE) conference, Vienna, Austria.
Adams, J., Sutor, J., Dodd, A., & Murphy, E. (2021). Evaluating the Performance of Synthetic Visual Data for Real-Time Object Detection. In The 6th International Conference on Communication, Image and Signal Processing (CCISP 2021), Chengdu, China, November 19-21, 2021.
Dodd, A., & Adams, J. (2021). The Role of Synthetic Data in Aerial Object Detection. Presented at International Marine, Aviation, Transport, Logistics and Trade (CMATLT001 2021: XV), Amsterdam, Netherlands. (TOP PAPER AWARD)
Adams, J., Murphy, E., Sutor, J., & Dodd, A. (2021). Assessing the Quality and Production of Synthetic Visual Data. In 9th International Conference on Information and Education Technology (ICIET 2021), Okayama, Japan, March 27-29, 2021 (5 pages). Co-sponsored by IEEE, Okayama University (Japan), South China Normal University (China), and the International Academy of Computing Technology (Hong Kong).
Adams, J., & Mitchell, A. L. (2020). TESA: A pedagogical approach to engage, study, and activate technology learning in an interdisciplinary setting. In Association for the Advancement of Computing in Education (AACE) (Ed.), Proceedings of EdMedia + Innovate Learning, The Netherlands (pp. 778-781). Waynesville, NC: Association for the Advancement of Computing in Education (AACE). Retrieved from https://www.learntechlib.org/primary/p/217389/
Adams, J., Dodd, A., Sutor, J., & Murphy, E. (2020). AI and Undergraduate Research: A Dialog in Project-Based Learning. In G. H. Marks & D. Schmidt-Crawford (Eds.), Society for Information Technology & Teacher Education International Conference, April 7, 2020 (online; ISBN 978-1-939797-48-3). Chesapeake, VA: Association for the Advancement of Computing in Education (AACE). Retrieved from http://www.learntechlib.org/fromc/56493
Honors in the Major
Erin Murphy (Computer Engineering, 2023) Exploring Encryption with Deep Steganography
Committee: Dr. Jonathan Adams, Dr. Olugbenga ‘Moses’ Anubi, Dr. Sastry Pamidi
2023 Kendall Smith (Computational Biology, 2024)
2021 John Sutor (Computer Science ’22) An Analysis of Super-Resolution Fine-Tuning for Image Generation
Many computer vision tasks, especially image classification-based tasks, require ample amounts of data in order to achieve acceptable classification accuracy results. However, for some domains, it can be very difficult or impossible to obtain a large enough amount of data to train a classification model. Other classification tasks are further hindered by the issue of class imbalance. This research explores a quicker means of generating synthetic data to aid in computer vision classification tasks. Using two forms of convolutional networks, namely the Projected GAN and the Residual Dense Network, this paper aims to decrease the time to generate synthetic data for computer vision practitioners.
2020 Ava Dodd (Computer Science ’21) Analyzing the Effectiveness of Synthetic Data in Aerial Object Detection
This study explores an end-to-end process for generating synthetic imagery, training YOLOv3, and performing real-time aerial detection. The study extends previous research on identifying sea turtles, using techniques for generating synthetic images suitable for training artificial intelligence models. The current study uses synthetic images generated by merging a 3D shark model with authentic backgrounds and applying effects to simulate movement and environmental properties. A Blender/Python interface was used to generate thousands of images with various camera and lighting positions. The trained model was evaluated using video footage captured over water from an aerial platform.
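As an illustration of the Blender/Python approach described above, here is a minimal sketch of randomizing camera and lighting positions per render; the object names, value ranges, and output path are placeholders, not the study's actual setup:

```python
# A minimal sketch, run inside Blender's bundled Python interpreter.
# "Camera" and "Sun" are placeholder object names from a default scene.
import random
import bpy

scene = bpy.context.scene
camera = bpy.data.objects["Camera"]
sun = bpy.data.objects["Sun"]

for i in range(1000):
    # Randomize camera position and sun angle for each render.
    camera.location = (random.uniform(-10, 10),
                       random.uniform(-10, 10),
                       random.uniform(20, 60))
    sun.rotation_euler = (random.uniform(0.0, 1.2), 0.0,
                          random.uniform(0.0, 6.28))
    scene.render.filepath = f"//renders/shark_{i:04d}.png"
    bpy.ops.render.render(write_still=True)
```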
Software Projects
MLab software projects focus less on writing a conference paper and more on hacking: software coding and exploration. Many of the projects we work on use Jupyter Notebooks, documents that combine prewritten code with explanatory text and can be run in a cloud service. Working with these files provides access to artificial intelligence agents, and it is easy to run the code and learn how the algorithms work. Working with code in this way is a great way to learn and to contribute to projects.
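For example, a minimal notebook-style sketch, assuming the Hugging Face transformers library (one of many that run well in hosted notebooks); the model and sample text are illustrative:

```python
# A typical ready-to-run notebook cell: load a pretrained model
# and try it immediately, no training required.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a default model
print(classifier("The MLab is a great place to learn AI."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```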
MLab has submitted the following projects for university commercialization:
– Shark Finder
– Binaural Beat Box
– Ocean Search and Rescue AI (OSRAI) (pending)
The MLab has also spawned a commercial enterprise (now discontinued):
– Syntheta.ai
Turtle Finder
Turtle Finder was initiated by Erin Murphy. More…
Binaural Beat Box
An application that uses AI-generated media to provide palliative care. More…
Shark Finder
Shark Finder uses YOLO to analyze the camera feed from a Phantom III quadcopter, as sketched below. More…
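A minimal sketch of the general pattern (YOLO detection over a live video feed), assuming the Ultralytics package and OpenCV; the stream address and weights are placeholders, not Shark Finder's actual configuration:

```python
# Run YOLO detection frame by frame on a video stream.
import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # stand-in weights, not Shark Finder's model
cap = cv2.VideoCapture("udp://0.0.0.0:11111")  # hypothetical stream address

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame)                       # detect objects in the frame
    cv2.imshow("detections", results[0].plot())  # draw boxes and labels
    if cv2.waitKey(1) == ord("q"):
        break
cap.release()
```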
Unpublished Papers
Sutor, J., & Adams, J. (2022). Exploring Synthetic Visual Data for Training Deep-Learning Based Classifiers. White paper.
MLab faculty — Use of audio production for medical applications: using Magenta (an audio AI) to generate synthetic music that includes binaural beats. This investigation has been funded as part of an award earned by winning the Collaborative Collision with Deep Care, a larger group of faculty investigating the use of AI in health settings.
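The Magenta pipeline itself is beyond a short example, but the binaural-beat idea is easy to illustrate: two pure tones a few hertz apart, one per stereo channel, produce a perceived beat at the difference frequency. A minimal sketch using NumPy and SciPy (frequencies and duration are illustrative):

```python
# Generate a 10 Hz binaural beat: 200 Hz left, 210 Hz right.
import numpy as np
from scipy.io import wavfile

rate = 44100
t = np.linspace(0, 10, rate * 10, endpoint=False)  # 10 seconds
left = np.sin(2 * np.pi * 200 * t)   # 200 Hz in the left ear
right = np.sin(2 * np.pi * 210 * t)  # 210 Hz -> perceived 10 Hz beat
stereo = np.stack([left, right], axis=1).astype(np.float32)
wavfile.write("binaural_10hz.wav", rate, stereo)
```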
MLab team — Autonomous racing drones: we used Python DJI libraries to train a Tello drone to fly through obstacles.
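A minimal sketch of scripted Tello flight, assuming the djitellopy library (one common Python interface to the Tello); the obstacle-course logic itself is omitted:

```python
# Connect to a Tello and fly a short scripted path.
from djitellopy import Tello

tello = Tello()
tello.connect()
print(f"Battery: {tello.get_battery()}%")

tello.takeoff()
tello.move_forward(100)    # distances are in centimeters
tello.rotate_clockwise(90)
tello.move_forward(50)
tello.land()
```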
MLab team — We used a CNN to develop an AI stethoscope. The computer-vision model is trained to look at spectrographic data and determine whether a heart is normal or has a defect. The algorithm was trained on 10 different heart anomalies and classified the heartbeats with an accuracy of 88%. A different algorithm, which will be needed to classify the specific defect, is expected to perform better.
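A minimal sketch of the spectrogram-classifier idea, assuming librosa for mel spectrograms and a small PyTorch CNN; the layer sizes, sample rate, class count, and file name are illustrative, not the lab's trained model:

```python
# Convert a heart-sound recording to a mel spectrogram, then
# classify it with a small (untrained, illustrative) CNN.
import librosa
import numpy as np
import torch
import torch.nn as nn

def to_mel(path: str) -> torch.Tensor:
    audio, sr = librosa.load(path, sr=4000)  # heart sounds are low-frequency
    mel = librosa.feature.melspectrogram(y=audio, sr=sr, n_mels=64)
    mel_db = librosa.power_to_db(mel, ref=np.max)
    return torch.tensor(mel_db).unsqueeze(0).unsqueeze(0).float()

class HeartCNN(nn.Module):
    def __init__(self, n_classes: int = 2):  # normal vs. defect
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = HeartCNN()
logits = model(to_mel("heartbeat.wav"))  # hypothetical recording
print(logits.softmax(dim=1))
```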
John and MLab team — This project sought to use search techniques to find accurate latent-space representations. The project uses several mathematical techniques to search for and identify the best image reproductions before they are upsampled, saving a great deal of time when using AI to generate images.
MLab team — This research project sought to define the ethical use of synthetic media, an attempt to contextualize synthetic media in terms of its beneficial and non-beneficial uses, and the algorithms behind it.