Fall 2024 Seminars

Speaker: Sarika Khushalani Solanki

Date: August 26, 2024

Time: 5:00 PM - 6:00 PM

Place: ESB G102

Abstract: An introduction to the graduate seminar series and a welcome to students.

Speaker Bio: Sarika Khushalani Solanki received B.E. and M.E. degrees from India in 1998 and 2000, respectively, and a Ph.D. in Electrical and Computer Engineering from Mississippi State University, USA, in 2006. She has been an Associate Professor in the Lane Department of Computer Science and Electrical Engineering at West Virginia University, Morgantown, WV, since August 2009. Prior to that, she worked for three years as a Senior Engineer at Open Systems International Inc., Minneapolis, MN. She has served as a reviewer for the National Science Foundation and the Department of Energy, is a past president of the IEEE Distribution Systems Analysis Subcommittee and the IEEE Career Promotion and Workforce Development Subcommittee, and is an editor of IEEE Transactions on Smart Grid. She is a recipient of the Honda Fellowship award and the NSF CAREER Award. Her research interests are smart grid, power distribution systems, computer applications in power system analysis, and power system control.

Speaker: Martin Dunlap

Date: September 9, 2024

Time: 5:00 PM - 6:00 PM

Place: AER 135

Abstract: Martin Dunlap will introduce the services and resources available through the WVU Libraries. These library resources may be critical to your graduate research.

Speaker Bio: He joined WVU in 1998 and spent more than 10 years working in the swamps of Florida as an environmental consultant. Since then, he has worked in libraries in various capacities, first in Cleveland, Ohio, and then here at WVU. He was recently promoted to Engineering Librarian at WVU.

Speaker: WVU IT

Date: NA

Time: NA

Place: At your desk

Abstract: There is an online plagiarism tutorial at https://wvu.qualtrics.com/jfe/form/SV_6W3rGjsAaEenYgd

Here are the steps:
View videos.
Take a self-test.
Repeat steps for each module.
Take the Plagiarism Avoidance Test.

How do you progress through this tutorial?
View the videos or read the material in a module, then take that module's self-test. The self-test is for practice, and completing it opens the next module. Repeat these steps for each of the five modules. After you have viewed or read the material in every module and taken the self-tests, take the Plagiarism Avoidance Test.

Speaker: Carlos Busso

Date: October 7, 2024

Time: 5:00 PM - 6:00 PM

Place: AER 135 and https://wvu.zoom.us/j/9188836315

Abstract: The almost unlimited multimedia content available on video-sharing websites has opened new challenges and opportunities for building robust multimodal solutions. This seminar will describe our novel multimodal architectures that (1) are robust to missing modalities, (2) can identify noisy or less discriminative features, and (3) can leverage unlabeled data. First, we present a strategy that effectively combines auxiliary networks, a transformer architecture, and an optimized training mechanism for handling missing features. This problem is relevant since it is expected that, during inference, the multimodal system will face cases with missing features due to noise or occlusion. We implement this approach for audiovisual emotion recognition, achieving state-of-the-art performance. Second, we present a multimodal framework for dealing with scenarios characterized by noisy or less discriminative features. This situation is commonly observed in audiovisual automatic speech recognition (AV-ASR) with clean speech, where the performance often drops compared to a speech-only solution due to the variability of visual features. The proposed approach is a deep learning solution with a gating layer that diminishes the effect of noisy or uninformative visual features, keeping only useful information. The approach improves, or at least maintains, performance when visual features are used. Third, we discuss alternative strategies to leverage unlabeled multimodal data. A promising approach is to use multimodal pretext tasks that are carefully designed to learn better representations for predicting a given task, leveraging the relationship between acoustic and facial features. Another approach is using multimodal ladder networks, where intermediate representations are predicted across modalities using lateral connections. These models offer principled solutions to increase the generalization and robustness of common speech-processing tasks when using multimodal architectures.
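For readers unfamiliar with gated fusion, the idea can be pictured with a minimal Python (PyTorch) sketch of a gated audiovisual fusion layer. This is only an illustration of the general technique; the module name, dimensions, and fusion rule below are assumptions chosen for a self-contained example and do not reproduce the speaker's actual architecture.

    # Illustrative sketch only: a generic gated fusion layer for audiovisual features.
    # Names, dimensions, and the fusion rule are assumptions, not the speaker's model.
    import torch
    import torch.nn as nn

    class GatedAudioVisualFusion(nn.Module):
        def __init__(self, audio_dim: int = 128, visual_dim: int = 128, hidden_dim: int = 128):
            super().__init__()
            self.audio_proj = nn.Linear(audio_dim, hidden_dim)
            self.visual_proj = nn.Linear(visual_dim, hidden_dim)
            # The gate looks at both modalities and decides, per dimension,
            # how much of the visual stream to let through.
            self.gate = nn.Linear(audio_dim + visual_dim, hidden_dim)

        def forward(self, audio: torch.Tensor, visual: torch.Tensor) -> torch.Tensor:
            a = torch.relu(self.audio_proj(audio))
            v = torch.relu(self.visual_proj(visual))
            g = torch.sigmoid(self.gate(torch.cat([audio, visual], dim=-1)))
            # Noisy or uninformative visual features are down-weighted by the gate,
            # so the fused representation falls back toward the audio stream.
            return a + g * v

    if __name__ == "__main__":
        fusion = GatedAudioVisualFusion()
        audio = torch.randn(4, 128)    # batch of 4 audio feature vectors
        visual = torch.randn(4, 128)   # corresponding visual feature vectors
        print(fusion(audio, visual).shape)  # torch.Size([4, 128])

The key design point is that the gate is learned jointly with the rest of the network, so the model itself decides when the visual features are informative enough to contribute to the fused representation.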

Speaker Bio: Carlos Busso is an incoming Professor in the Language Technologies Institute at Carnegie Mellon University. He is currently a Professor in the Department of Electrical and Computer Engineering at the University of Texas at Dallas, where he is also the director of the Multimodal Signal Processing (MSP) Laboratory. His research interests are in human-centered multimodal machine intelligence and its applications, with a focus on the broad areas of speech processing, affective computing, and machine learning methods for multimodal processing. He has worked on speech emotion recognition, multimodal behavior modeling for socially interactive agents, and robust multimodal speech processing. He is a recipient of an NSF CAREER Award. In 2014, he received the ICMI Ten-Year Technical Impact Award. His students received the third prize of the 2015 IEEE ITSS Best Dissertation Award (N. Li) and the 2024 AAAC Student Dissertation Award (W.C. Lin). He also received the Hewlett Packard Best Paper Award at IEEE ICME 2011 (with J. Jain) and the Best Paper Award at AAAC ACII 2017 (with Yannakakis and Cowie). He received the Best of IEEE Transactions on Affective Computing Paper Collection in 2021 (with R. Lotfian) and the Best Paper Award from IEEE Transactions on Affective Computing in 2022 (with Yannakakis and Cowie). In 2023, he received the Distinguished Alumni Award in the Mid-Career/Academia category from the Signal and Image Processing Institute (SIPI) at the University of Southern California, as well as the ACM ICMI Community Service Award. He currently serves as an associate editor of the IEEE Transactions on Affective Computing and as a member of the IEEE Speech and Language Processing Technical Committee (2024-2027). He is a member of AAAC, a senior member of ACM, an IEEE Fellow, and an ISCA Fellow.

Speaker: Arun Ross

Date: October 14, 2024

Time: 5:00 PM - 6:00 PM

Place: https://wvu.zoom.us/j/9188836315

Abstract: Biometrics is the science of recognizing individuals based on their physical and behavioral attributes such as fingerprints, face, iris, voice and gait. The past decade has witnessed tremendous progress in this field, including the deployment of biometric solutions in diverse applications such as border security, national ID cards, amusement parks, access control, and smartphones. At the same time, the paradigm of deep learning using deep neural networks is rapidly changing the landscape of biometrics. Despite these advancements, biometric systems have to contend with a number of challenges related to deep fakes, spoof attacks, and personal privacy. This talk will highlight some of the recent progress made in the field of biometrics; present our lab’s work on topics such as detecting physical and digital attacks, enhancing personal privacy, and generating synthetic biometric data; and discuss some of the challenges that have to be solved in order to deepen society’s trust in biometric technology.

Speaker Bio: Arun Ross is the Martin J. Vanderploeg Endowed Professor in the Department of Computer Science and Engineering at Michigan State University and the Site Director of NSF's Center for Identification Technology Research (CITeR). He conducts research on biometrics, privacy, computer vision, and deep learning. He is a recipient of the JK Aggarwal Prize (2014) and the Young Biometrics Investigator Award (2013) from the International Association for Pattern Recognition for his contributions to the fields of pattern recognition and biometrics. He was designated a Kavli Fellow by the US National Academy of Sciences by virtue of his presentation at the 2006 Kavli Frontiers of Science Symposium. Ross is also a recipient of the NSF CAREER Award. He has advocated for the responsible use of biometrics in multiple forums, including the NATO Advanced Research Workshop on Identity and Security in Switzerland in 2018. He testified as an expert panelist at an event organized by the United Nations Counter-Terrorism Committee at the UN Headquarters in 2013, and in June 2022 he testified before the US House Science, Space, and Technology Committee on the topic of biometrics and personal privacy. He is a co-author of the monograph “Handbook of Multibiometrics” and the textbook “Introduction to Biometrics”.

Speaker: Farshid Naseri

Date: October 21, 2024

Time: 5:00 PM - 6:00 PM

Place: https://wvu.zoom.us/j/9188836315

Abstract: In this talk, we’ll explore the fascinating complexity of lithium-ion batteries in electric vehicles (EVs). These systems are nonlinear, time-varying, and degrade over time, making their modeling a significant challenge. Traditional approaches rely on model-based techniques, but as battery behavior constantly shifts, accurate predictions become difficult. This is where we introduce an innovative approach that combines AI with model-driven methods to enhance the accuracy of battery state predictions. This hybrid approach leverages AI to identify patterns and trends in battery behavior that traditional models may overlook, allowing for more effective monitoring and management of battery health. Such advancements are critical not only for prolonging battery life but also for enhancing the efficiency and safety of EVs. I'll also provide a brief overview of key EU-funded projects and the broader European research landscape on battery technologies, showcasing how Europe is investing in this crucial field. Additionally, we’ll discuss hardware-in-the-loop prototyping, which plays a vital role in developing and refining battery management software—from data generation and modeling to software development, integration, testing, and final prototyping.
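As a rough illustration of what a hybrid model-based plus data-driven estimator can look like, the Python sketch below pairs simple Coulomb counting with a least-squares residual correction on synthetic data. The battery parameters, feature choices, and data are assumptions made for a self-contained example and do not represent the methods discussed in the talk.

    # Illustrative sketch only: a toy hybrid of a model-based state-of-charge (SOC)
    # estimate (Coulomb counting) with a data-driven correction fitted by least squares.
    # All parameters and data below are assumptions for illustration.
    import numpy as np

    rng = np.random.default_rng(0)

    capacity_ah = 50.0          # assumed nominal cell capacity
    dt_h = 1.0 / 3600.0         # 1-second time step, expressed in hours
    n = 3600                    # one hour of samples

    current_a = 10.0 + rng.normal(0.0, 0.5, n)          # true (noisy) discharge current
    true_soc = 0.9 - np.cumsum(current_a) * dt_h / capacity_ah

    # Model-based estimate: Coulomb counting with a biased current sensor.
    measured_current = current_a + 0.8                   # constant sensor bias
    model_soc = 0.9 - np.cumsum(measured_current) * dt_h / capacity_ah

    # Data-driven correction: fit the model's residual from simple measured features
    # (elapsed time, measured current, bias term) on a labeled reference segment.
    features = np.column_stack([np.arange(n) * dt_h, measured_current, np.ones(n)])
    residual = true_soc - model_soc                      # known only on reference data
    coef, *_ = np.linalg.lstsq(features[: n // 2], residual[: n // 2], rcond=None)

    hybrid_soc = model_soc + features @ coef             # corrected estimate everywhere

    print("model-only RMSE:", np.sqrt(np.mean((model_soc - true_soc) ** 2)))
    print("hybrid RMSE    :", np.sqrt(np.mean((hybrid_soc - true_soc) ** 2)))

The point of the toy example is only the division of labor: the physics-style model carries the bulk of the prediction, while a learned component absorbs the systematic error the model cannot capture.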

Speaker Bio: Farshid Naseri is an expert in vehicular and storage technologies and is currently a postdoctoral researcher at Aalborg University (AAU Energy), Denmark. He received the B.S.E.E. in Control Engineering from Shiraz University of Technology (SUTECH) in 2013, and the M.Sc. and Ph.D. in Electrical Power Engineering from Shiraz University, Shiraz, Iran, in 2015 and 2019, respectively. In recent years, he has contributed to several large national and European R&D projects focused on the development of high-performance battery and power electronics systems for electric vehicles, including DeepBMS, HELIOS, HEROES, and iBattMan. He has received several prestigious awards in recognition of his research and innovation in electric vehicle technologies, including the EU's flagship Marie Curie Fellowship and the INEF Excellence Award. His research interests encompass electric vehicles, battery systems, control systems, and power electronic systems design. Dr. Naseri is an active member of the IEEE Young Professionals, the IEEE Vehicular Technology Society, and the IEEE Industrial Electronics Society, where he has served as an organizer, reviewer, and contributor for several IEEE conferences and journals. Additionally, he is a board member of the Vehicle Engineering Section of the journal Machines and has guest-edited for several journals, including Batteries.