Cybersecurity Research


CoVA CCI is conducting research that will lead to breakthroughs in cyber physical systems, contributing to the CCI mission of establishing Virginia as a global leader in secure CPS and in the digital economy. This research is focused in the areas of cyber physical systems (CPSS), 5G, and Artificial Intelligence in the Maritime, Defense and Transportation business sectors. CoVA CCI will partner with local cybersecurity businesses as well as researchers from across the Commonwealth to accomplish this goal by supporting three objectives.
CoVA CCI will create a secure shared research environment (COVA SHARE) for researchers, faculty and businesses to conduct cybersecurity research and instruction in state-of-art computer labs.
We will develop a 5G testbed to conduct research on the vulnerabilities and opportunities of 5G (and future generation) wireless communication technology.
CoVA CCI will sponsor cybersecurity research projects focused on CPSS, 5G, AI and other emerging fields with through collaborative research partnerships within CCI institutions and business partners.
FY 2025 Cybersecurity Research Projects
The research focus for FY 2025 is on Cybersecurity for AI and AI for Cybersecurity.
CCI Funded Projects
Project: Adaptive Intrusion Detection in IoT Networks Using LLM-Driven Behavioral
Project Team: Faryaneh Poursardar, ODU, Neda Moghim, ODU, Christo Kurisummoottil Thomas, Virginia Tech, and Walid Saad, Virginia Tech
Funding is provided by CCI.
Analysis and Deep Reinforcement LearningThis research project explores the integration of Deep Reinforcement Learning (DRL), Large Language Models (LLMs), neuro-symbolic AI, and wireless networking to create adaptive intrusion detection systems for Internet of Things (IoT) networks. The central research question focuses on developing resilient IoT systems capable of recovering swiftly from cyberattacks without degrading the user experience. To address this, the project introduces several key innovations.
First, an adaptive prompt-generation system is proposed using DRL to optimize LLM queries in real-time by tracking the evolving nature of cyberattacks. This system incorporates an evolving Retrieval-Augmented Generation (RAG) mechanism that retrieves relevant knowledge from scholarly sources, enabling LLMs to effectively formulate mitigation strategies against dynamic threats.
Second, the project seeks to improve LLM detection capabilities for complex attack scenarios, including Advanced Persistent Threats (APTs), zero-day exploits, and multi-stage attacks. A novel DRL-LLM framework is developed using neuro-symbolic AI to enhance generalizability and improve sample efficiency, evaluated against data-driven state-of-the-art systems. Lastly, resilience metrics are formulated to measure IoT network disruption times during various cyberattacks, with the aim of minimizing downtime. The system’s effectiveness will be demonstrated across different IoT domains—e.g. healthcare, smart homes, and industrial control systems—validating its ability to detect and mitigate attacks using the proposed resilience framework.
https://ws-dl.blogspot.com/2025/01/2024-01-27-llm-driven-behavioral.html
AI-Powered Cyber Defense: Leveraging Transformer Models and eXplainable
Reinforcement Learning Methods for Advanced Intrusion Detection and Response System
Project Team: Mohammad Ghasemigol, ODU, Daniel Takabi, ODU, Yuichi Motai, VCU, Simegnew Yihunie Alaba, VCU, and Michael Lapke, CNU
Funding is provided by CCI.
As cyber threats become increasingly sophisticated and frequent, the need for advanced Intrusion Detection and Response Systems (IDRS) is more critical than ever. These systems are essential for defending networks against external and internal intrusions by detecting potential threats and applying appropriate responses. However, a significant challenge faced by existing IDRSs is the overwhelming volume of alerts generated by IDSs, which makes manual response impractical. Additionally, the
effectiveness of automated IRS is often undermined by difficulties in accurately estimating response costs, assessing the network situation, and providing clear explanations for chosen responses. These challenges often lead to suboptimal responses, negatively impacting network performance and pushing administrators towards inefficient manual methods. To overcome these challenges, this proposal introduces a novel approach that integrates transformer encoder, decision transformer, and eXplainable Reinforcement Learning (XRL) methods to build an AIpowered IDRS.
The objectives include: 1) Developing a preprocessing module to normalize network traffic; 2) Designing a cutting-edge IDS utilizing transformer architecture for better handling of complex and multistage attacks; 3) Developing an automated IRS based on decision transformer to optimize responses dynamically based on real-time analysis; and 4) Leveraging XRL methods to enhance transparency and interpretability of intrusion responses. This proposal is highly relevant to the CCI and this call by advancing the development of AI-driven cybersecurity solutions that address real-world challenges in cyber defense.
All in One: A Multitask LLM-Based Vulnerability Detector with Conversational Assistance
Project Team: Huaie Shao, W&M, Yue Xiao, W&M, and Xiaokuan Zhang, GMU
Funding is provided by CCI.
Software vulnerabilities pose serious risks to systems, potentially leading to crashes, data loss, and security breaches. Classical static analysis-based vulnerability detection tools often suffer from high false positive or false negative rates and struggle to generalize to new types of vulnerabilities. To address this, some studies introduce deep learning methods, but they can only identify whether a code snippet is vulnerable without pinpointing vulnerable functions or providing explanations. Recently, a few works have adopted large language models (LLMs) to identify vulnerabilities using prompt engineering or instruction fine-tuning. However, existing LLM-based methods limit their focus to specific aspects like detecting vulnerability types or locations.
In this project, we will develop a multitask LLM-based vulnerability detector capable of detecting, pinpointing, and explaining vulnerable functions along with providing fix suggestions. The proposed work offers two innovative tasks: (i) Create a comprehensive dialogue-based vulnerability benchmark encompassing a wide range of tasks, including vulnerability type detection, vulnerability explanation, and location; (ii) Develop a knowledge-guided multitask LLM-based detector using instruction fine-tuning. Finally, the proposed LLM-based vulnerability detector will be evaluated on both the constructed benchmark dataset and software vulnerability in real-world applications. In sum, this project will fundamentally enhance the security of software, which will benefit billions of users. Moreover, it will advance the interdisciplinary research between machine learning, software engineering, and security.
COVA CCI Funded Projects
Towards a Knowledge-Guided Foundation Model for Long-Tail Anomaly Detection in Network Traffic
Project Team: Gang Zhou, W&M, Huajie Shao, W&M, and Bo Ji, Virginia Tech
Funding is provided by COVA CCI.
Network traffic anomalies such as attacks and failures pose a serious threat to the security of computer networks and systems like mobile devices and cloud computing, leading to the loss of intellectual property, financial resources, and customer data. Thus, identifying and diagnosing network traffic anomalies remains an important yet challenging problem. While classical machine learning techniques have been introduced to detect traffic anomalies based on their features, these methods struggle to generalize to unknown anomalies. To overcome this problem, recent studies have adopted pre-training foundation models (FMs) for network traffic anomaly detection. However, existing approaches do not consider domain knowledge or address the long-tailed traffic data issue during model training.
To address these challenges, this project seeks to develop a knowledge-guided foundation model for network traffic anomaly detection when data is under long-tailed distribution. The proposed work offers two key innovations: (1) develop a knowledge-guided foundation model to improve the generalization capability of traffic anomaly detection, and (2) Adopt knowledge-guided data augmentation and semantics-based data selection to mitigate the long-tail problem during fine-tuning. The proposed project will be evaluated using real-world traffic datasets, such as IoT attacks, Android Malware, and DDoS attacks. In sum, the developed detection model is expected to timely identify unknown traffic anomalies, benefiting millions of users and companies. Moreover, the project will engage students, especially women and those from underrepresented groups, in pioneering research focused on foundation models for anomaly detection in cybersecurity.
Secure and Privacy-Preserving Decentralized AI through Model Refine and Fully Homomorphic Encryption
Project Team: Qianlong Wang, ODU, Sachin Shetty, ODU, and Changqing Luo, VCU
Funding is provided by COVA CCI.
With the emergence of the Internet of Things (IoT), data has been generated in a distributed manner. Hence, distributed learning algorithms have been studied, where the learning process is conducted in a distributed fashion that can effectively utilize the distributed/decentralized data resource. However, the current distributed learning systems over loT are facing unavoidable challenges, which are mainly categorized into security, privacy, compatibility, and efficiency. First, the success of such learning systems heavily depends on the integrity of both the central server and data holders. Second, the loT and Cyber-Physical Systems (CPS) usually hold heterogeneous models and data, which require the learning systems’ high compatibility to utilize these resources effectively. Third, security and privacy are becoming major concerns when more users contribute to the learning process to enable reliable learning performance. The proposed research aims to exploit potential system vulnerabilities in the decentralized learning framework, develop attack and defense mechanisms, and theoretically analyze the system’s resilience. Additionally, the proposed research aims to enable a privacy-preserving decentralized learning process, which is to propose a privacy-preserving scheme to protect sensitive data, and theoretically analyze and prove the privacy guarantee. Moreover, this project lays the groundwork for system research in dense IoT applications supported by decentralized learning. It generates preliminary experimental data necessary to develop an independent and competitive research agenda.
Cyber-Attack Resilient Distributed and Explainable AI with Zero Trust Architecture
Project Team: Rafael Diaz, ODU, Bikash Chandra Singh, ODU, and Zeb Bowden, Virginia Tech
Funding is provided by COVA CCI.
This research project aims to develop groundbreaking theories and innovative techniques to design stakeholder-centric, secure data-sharing and analytics systems. In particular, the project focuses on devising algorithms and frameworks that integrate federated learning (FL) and artificial intelligence (AI) to address the security challenges posed by big data collaborative supply chains. These supply chains involve multiple stakeholders, each bound by stringent data-privacy and confidentiality requirements, creating a complex environment for secure collaboration.
The central objective is to leverage distributed AI through federated learning by developing a novel privacy-preserving technique that enables secure data analysis without the need for raw data sharing. This will empower stakeholders to train AI models collaboratively while maintaining the confidentiality of their sensitive information. The proposed approach will ensure that global model updates are validated through a zero-trust architecture, providing an additional layer of security and reducing vulnerabilities. The zero-trust model will enforce strict validation protocols, ensuring continuous protection for local models and mitigating the risk of unauthorized access or breaches.
By addressing the dual challenges of data privacy and secure collaboration, this project will provide a robust solution for industries reliant on multi-stakeholder data ecosystems, such as supply chain management, healthcare, and finance. The anticipated outcomes will enable organizations to unlock the full potential of big data analytics while maintaining compliance with data protection regulations, fostering greater trust and cooperation among stakeholders.
Toward Integrated Security and Privacy Solutions for Multi-Modal AI
Project Team: Lusi Li, ODU, Daniel Takabi, ODU, Ruin Ning, ODU, and Yixuan (Janice) Zhang, W&M
Funding is provided by COVA CCI.
Traditionally, security and privacy issues of artificial intelligence (AI) systems have been treated as separate concerns, each addressed through different techniques. However, recent research reveals a significant interdependence between these two issues: efforts to enhance security can inadvertently compromise privacy and vice versa. This delicate interplay becomes even more critical with the advent of multi-modal AI systems. In these complex systems, interdependencies between different data modalities can exacerbate trade-offs between security and privacy, amplifying vulnerabilities in both areas. To address these challenges, we will conduct a comprehensive investigation into the complex interplay between security and privacy in multi-modal AI systems. This includes systematically examining the interdependencies between these issues and striving to understand the mechanisms by which enhancements in one area affect the other. Building upon these insights, we will explore innovative countermeasures and solutions that provide trade-offs between security and privacy. Our objective is to develop a balanced and integrated framework that cohesively addresses both security and privacy concerns. This project will enable the creation of AI systems that are not only robust against external threats but also capable of safeguarding user privacy. This will accelerate the safe and responsible deployment of AI technologies in critical applications, mitigate risks, and improve trust in AI systems in various sectors.
Leveraging Large Language Models for Enhanced Software Security Analysis and Malware Detection
Project Team: Yanhai Xiong, W&M, and Kun Sun, GMU
Funding is provided by COVA CCI.
This project proposes an innovative framework leveraging Large Language Models (LLMs) and Retrieval-Augmented Generation (RAG) techniques to enhance software security analysis and malware detection for Android applications. The proliferation of Android apps has led to an increase in potentially harmful software, making efficient and accurate security analysis crucial. Current methods rely heavily on human experts, which is time-consuming and limited in scope. While machine learning approaches show promise, they often lack explainability, hindering result verification.
The proposed framework aims to overcome these limitations by integrating LLMs with RAG systems to analyze Android application behavior. This approach will focus on identifying call graphs and data-flow graphs related to security queries, as well as isolating malicious code snippets from Android project source code. By incorporating RAG techniques, the framework addresses the challenge of LLM “hallucinations” in domain-specific tasks, enhancing the reliability and accuracy of generated analyses.
The project’s intellectual merit lies in its potential to revolutionize software security approaches by improving scalability, accuracy, and efficiency in Android application security analysis. By developing a system that can effectively extract hidden information from code and provide explainable results, this research contributes to creating more resilient and secure digital ecosystems. The multi-faceted evaluation approach, including collaboration with industry experts, ensures rigorous testing and validation of the project outcomes, further enhancing its potential impact on the field of software security.
Study of Adversarial Attack Strategies on Autonomous Vehicles equipped with LiDAR Sensors
Project Team: Abhishek Phadke, CNU, and Pratip Rana, ODU
Funding is provided by COVA CCI.
Adversarial attacks are a serious threat to the reliability of autonomous vehicles. Autonomous vehicles with LiDAR systems are particularly vulnerable to object spoofing and vanishing. Fake object injection is a serious threat that can trick the system and potentially cause damage, even accidents. The targeted vanishing of the LiDAR cloud points is another successful attack that makes the deep neural networks model misclassify the 3D objects. The first part of the project aims to study such adversarial attack strategies on autonomous vehicles to identify which types of attacks are successful. We will examine state-of-the-art deep neural network-based 3D object detection models, such as voxel-based models, point-net base models, and graph neural network-based models, and their susceptibility to these attacks. The last part of this project aims to identify defense techniques against adversarial attacks and identify the architectural changes in the deep neural networks model and LiDAR sensors to improve the security and reliability of these autonomous vehicles.
Enhancing the Security of Large Language Models Against Persuasion-Based Jailbreak Attacks in Multi-Turn Dialogues.
Project Team: Javad Rafiei Asl, ODU, Shangtong Zhang, UVA, and Prajwal Panzade, ODU
Funding is provided by COVA CCI.
This research aims to address the vulnerabilities in Large Language Models (LLMs) posed by multi-turn persuasion-based jailbreak attacks, where attackers exploit conversational manipulation to bypass safety protocols. These attacks mirror human-like interactions, making them particularly dangerous as they leverage the model’s understanding of natural language and context to generate harmful outputs. Current defenses often focus on single-turn adversarial attacks, leaving a critical gap in addressing the iterative, multi-turn strategies that attackers use in real-world scenarios. Our research will develop a comprehensive defense mechanism that evolves alongside these prolonged interactions, continuously learning and adapting to new persuasive strategies. This work involves building a dataset of persuasive attack techniques, simulating multi-turn adversarial models, and implementing reinforcement learning-driven defensive architectures. The proposed system will dynamically adjust its responses to protect LLMs from manipulation while maintaining conversational naturalness. The outcomes will contribute to AI safety across sectors such as cybersecurity, finance, and healthcare, where LLMs are widely deployed. Furthermore, the project aligns with the mission of the Commonwealth Cyber Initiative (CCI) by advancing research on adversarial attack modeling and AI defense strategies, fostering Virginia’s leadership in cybersecurity innovation.
FY 2024 Cybersecurity Research Projects
COVA CCI funded four projects during FY 2024. Three of these were part of the CCI Research in Supply Chain Cybersecurity CFP and one from the Inclusion and Accessibility in Cybersecurity CFP.
Project: Enhancing Security of Software Supply Chain – A Focus on AI/ML
Project Team: Mohammad GhasemiGol, ODU and Daniel Takabi, ODU
Project Abstract: With the advances in Artificial Intelligence (AI), Machine Learning (ML) models are being integrated into a variety of systems across a wide range of applications, including the military, healthcare, finance, smart cities, manufacturing, agriculture, transportation, and logistics. The rapid proliferation of AI/ML in software development has introduced a critical concern in software supply chain security. The AI/ML pipeline is fraught with a multitude of threats in the development and deployment process including data poisoning, backdoors, trojans, evasion attacks, and privacy attacks. Despite all these threats, pre-trained AI/ML models are widely used in AI/ML-integrated systems without any safety or security analysis. The goal of this research is to investigate AI/ML pipeline threats to enhance software supply chain security. For this purpose, a comprehensive framework will be developed to assess vulnerabilities, detect, and mitigate cyberattacks in AI/ML modules. In particular, a Fuzz testing method will be developed to discover existing vulnerabilities, and a novel attack graph will be developed to model the relationships between vulnerabilities. A new rule-based vulnerability detection system will be proposed to detect common attacks such as data poisoning and model evasion by actively monitoring each component of the AI/ML pipeline. Moreover, a mitigation framework will be proposed to define proactive and reactive strategies to counter cyberattacks targeting AI/ML models.
This research holds significant relevance to the Commonwealth Cyber Initiative (CCI) by aligning with its core objectives of advancing supply chain cybersecurity. It contributes to knowledge dissemination, potential commercialization, economic development, and educational outreach by addressing emerging cybersecurity threats related to AI/ML systems and developing innovative solutions.
Project: A Paradigm Shift: Innovating Supply Chain Security for AI-Assisted Devices
Project Team: Rui Ning, ODU, Yuhong Li, ODU, Peng Jiang, ODU, Xinwei Deng, VT
Project Abstract: This project aims to address the pressing challenges associated with securing supply chains tailored for AI-assisted devices (AADs). As the integration of AADs becomes increasingly prevalent across various sectors, the complexity and vulnerability of their supply chains have surged. This proposal outlines a holistic approach to fortify these supply chains, encompassing the development of a dedicated testbed for in-depth evaluation, the creation of a multi-dimensional risk assessment model to pinpoint supplier vulnerabilities and their cascading effects, and the strategic redesign of supply chain architectures based on comprehensive risk evaluations. Through these measures, the project aims to pioneer advancements in the security and resilience of supply chains designed for AADs, catering to the distinct challenges posed by the widespread adoption of AI technologies.
Project: Advancing Supply Chain Security through Quantum Computing: A Framework for Rapid Optimization
Project Team: Qun Li, WM
Project Abstract: The Russia-Ukraine war and the COVID-19 pandemic have clearly revealed the vulnerabilities inherent in global supply chains, underscoring the pressing need for resilient and adaptable solutions. Instead of exclusively focusing on individual technological security measures for supply chains, our attention shifts to addressing the complex task of re-establishing supply chains after disruption. The collapse of a supply chain inflicts significantly greater harm and requires rapid resolution, repair, and reconstruction. To address these issues, supply chain security necessitates a comprehensive approach encompassing systematic planning and optimization.
This proposal seeks funding from the CCI to drive progress in the field of supply chain security by harnessing the potential of quantum computing technologies. We have developed an approach to program quantum computers, finely tuned for tackling the intricate optimization challenges within supply chain management. These challenges span local inventory control optimization and global transportation optimization for vehicle routing, presenting exceptional complexity due to their intricate structures and numerous unknown variables. Quantum computers present an efficient solution to these complexities. This algorithm holds the promise of revolutionizing supply chain logistics, ensuring smoother operations in the face of wide-area disruptions.
Inclusion and Accessibility in Cybersecurity
Project: Tackling Dark Pattern-Induced Online Deception of People with Visual
Disabilities
Project Team: Vikas Ashok, ODU and Faryaneh Poursardar, ODU
Project Abstract: The project seeks to uncover and investigate the different types of “dark-pattern” web-interface designs that can potentially deceive blind and low-vision (BLV) persons online and consequently impact their web browsing experience and privacy. Unlike sighted users, BLV users must rely on third-party assistive technologies such as a screen reader or a screen magnifier to interact with webpages. As webpages are typically not designed for assistive technology-driven interaction, certain kinds of content layouts and formatting methods (intentional or unintentional) can covertly manipulate BLV users into performing actions that can cause significant negative financial, cognitive, temporal, and privacy impacts for these users. As a first step towards addressing this problem, in this seed project, we will: (i) Develop a taxonomy of deceptive BLV-specific web dark patterns by analyzing data from an interview study with 50 diverse group of BLV participants; and (ii) Build a novel representative dataset containing BLV-specific dark-pattern examples collected by manually analyzing a wide range of websites belonging to different domains. The insights from the study together with the dark-pattern dataset from this project will serve as the necessary foundation to fuel further research in this area, especially in the design and development of novel intelligent interactive solutions that can help BLV users avoid dark-patterns in websites.
FY 2023 Cybersecurity Research Projects
2023 Maritime Research Project Article
Project: Applying NIST SP 800-33 Risk Assessment methodology to produce cyber-hardened 5G communications capabilities for autonomous maritime platforms
Project team: Yiannis Papelis, ODU, Ahmet Saglam, ODU, and Casey Batten, SimIS, Inc.
Project Abstract: SimIS Inc., a Portsmouth-based company, designs and develops a family of marine autonomous vehicles. One of these vehicles is the RiverScout, a two-man portable autonomous surveillance platform equipped with sensors for surveillance and autonomous operations. To meet expanding DOD operational mission requirements, the RiverScout requires a high-bandwidth, long-range, cyber-encrypted data link using the new global wireless 5G standard. VMASC is an enterprise research center engaged in multidisciplinary applied research to integrate new technologies into maritime platforms and develop novel maritime autonomy solutions. SimIS’ RiverScout system consists of hundreds of micro-electronic assets (components and subcomponents) which are vulnerable to threats that could have an adverse effect on a RiverScout mission. CMMC guides the implementation of a “zero-trust” framework in system design and production to validate the security of system assets (components). We will apply the SP 800-30 Risk Assessment process in RiverScout data-link design and extend it to support maritime systems. Our Tidewater maritime partner Fairlead Boatworks provides commercial vessel system design and integration expertise that will enable the maturation and extension of a RiverScout-based risk assessment framework to support maritime industry-wide risk assessment/CMMC requirements. Additionally, the commercialization of the RiverScout capability must ensure CMMC cybersecurity Risk Assessment (RA) compliance – a core SimIS competency. The VMASC R&D capabilities teamed with SimIS maritime platform development and SP 800-30 Cyber Risk Assessment expertise, and FairLead commercial maritime manufacturing skills establishes an experienced team for the COVA-CCI project. The goal of this joint project is to design and integrate an autonomous communications system for the RiverScout that implements SP 800-171R2 and SP 800-30 Risk Assessment processes that assess maritime platform asset threat vulnerability (through supply chain activities and external service providers) and supports the accelerated commercialization and SP 800-30 compliance of Tidewater maritime industry products and services.
Project: Exploring challenges and adoption enablers of cybersecurity maturity model certification in maritime industries
Project team: Chon Abraham, W&M, Tracy Gregorio, G2Ops
Project Abstract: Implementation of cybersecurity guidance and governance is as complex as cyber threats themselves. This research builds on prior research funded by the CCI Experiential Learning Grant that resulted in preliminary systems analysis and design (SA&D) of the Cybersecurity Maturity Model Certification Assessment Assistant (CyMMCAA) tool for meeting Cybersecurity Maturity Model Certification (CMMC) compliance requirements. Research conducted utilized the NIST SP 800-171, NIST SP 800-171A, and other related documents that define CMMC assessment. The SA&D prior effort involved process analysis and a case study approach to map and refine manual CMMC assessment performed by G2 Ops Inc. and Registered Provider Organization (RPO) as a prime vendor for ship modernization and a real client of G2 Ops as an Organization Seeking Certification (OSC). The process illuminated the challenges of compliance by metrics of time and cost. The RPO in the prior study is the project partner for this current proposal to continue development and refinement of the CyMMCAA tool, consolidate insight for its use in CMMC compliance via a cyber roadmap, and provide guidance for cyber risk valuation costs for which the tool and roadmap can aid in avoiding.
Project: Navigating cybersecurity compliance challenges for the maritime industry in southeast Virginia
Project team: Mohammad Almalag, CNU, Michael Lapke, CNU, Christopher Kreider, CNU, and Leigh Armistead, Peregrine Technical Solutions, LLC
Project Abstract: This project intends to develop a screening process to help the Maritime Industry in Hampton Roads to ensure compliance with a variety of cybersecurity requirements. It is led by Christopher Newport University and is teamed with Dr. Leigh Armistead at Peregrine Technical Solutions. To facilitate this work, the team will reach out to a large number of shipbuilding, ship repair, and ship modernization industries in Tidewater, to perform requirements gathering of all of the required cybersecurity regimes to include but limited to:
DoD-mandated Cybersecurity Maturity Model Certification (CMMC).
National Institute of Standards and Technology (NIST) controls
International Maritime Organization – MSC-FAL.1/Circ.3 Guidelines on maritime cyber risk management and Resolution MSC.428(98) – Maritime Cyber Risk Management in Safety Management Systems.
This will be done by a series of surveys to the maritime industry partners and the result will be a recommendation of actions that they should take to become compliant.
Project: Automated CMMC compliance for shipbuilding
Project team: Safdar Bouk, ODU and Andrew Mixon, Chitra
Project Abstract: Shipbuilding companies supporting Department of Defense (DOD) contracts with controlled unclassified information (CUI) will require certification under Cybersecurity Maturity Model Certification (CMMC) 2.0. The current lack of standard cybersecurity practices for DOD contractors inhibits cyber readiness for all DOD organizations. CMMC 2.0, a process managed and controlled by the DOD, ensures contractors are compliant with requisite cybersecurity requirements. There is currently a lack of effective tools, technology, and training to assist companies in their effort to achieve certification under CMMC 2.0. The existing tools and training services lack inherent simplicity of use and require operating personnel to have a strong cybersecurity background through the CMMC process. Chitra Productions, LLC (CHITRA), a woman-owned small business founded in 2008 in Virginia Beach, Virginia, has supported several shipbuilding, ship modernization, and ship maintenance initiatives in the Coastal Virginia area, including cybersecurity, maintenance, training, and engineering support. Chitra has recently developed and engineered a software tool that facilitates cybersecurity compliance within the Risk Management Framework (RMF) for systems and software in DOD organizations. Chitra is currently developing similar software to facilitate efficiency within the CMMC process for contract companies that support shipbuilding, ship modernization, and ship maintenance within the DOD and maritime industry. We will collaborate with the ODU cybersecurity research team and ManTech Advanced System International lnc.’s shipbuilding cybersecurity team to develop the CMMC compliance tool for the shipbuilding industry.
Project: Maritime cybersecurity maturity model certification domain handbook.
Project team: Sachin Shetty, ODU, Warren Bizub, SimIS, Inc., and Michael Humprey, SimIS, Inc.
Project Abstract: SimIS Inc., a Portsmouth, VA-based Capability Maturity Model Integrated (CMMI) level 3 accredited IT services and maritime platform production company, is implementing the Cybersecurity Maturity Model Certification (CMMC) 2.0 controls to support our existing customers. We deliver technology solutions for land and sea autonomous platforms. CMMC compliance goals and best practices require a thorough analysis and implementation of secure IT controls for existing maritime IT network capabilities. Maritime autonomous industry cybersecurity staff lack experience in the elevated standards prescribed by CMMC. The subject matter experts for compliant CMMC deployment are mid to high level cybersecurity engineers – not the front-line Cyber-IT technician. The experience gap between cybersecurity engineers and front-line Cyber-IT technicians’ results in compliance challenges for many of the current generation of Cyber IT technicians (System Admins, Network Engineers, etc.). The SimIS proposed project goal is to create a Maritime CMMC Domain Handbook based on the CMMC version 2.0 level 2 published system controls (levels most applicable to maritime industry) with implementation guidelines for the 14 individual control domains. Each domain playbook will provide detailed analysis, planning, and implementation roadmaps for each of the 14 domains and will be written in language suitable for maritime staff with technical level implementation knowledge. We will collaborate with our ODU cybersecurity partners coupled with Fairlead to provide translation from maritime engineering to cybersecurity engineering vocabulary by leveraging our extensive CMMC experience to translate the considerable nesting of cybersecurity and CMMC governance into a clear, scalable and standards based Maritime CMMC Domain Handbook.
Project: Spotlighting and mitigating cyber attacks in AIoT-enabled maritime transportation systems.
Project Team: Yi He, ODU, Rui Ning, ODU, Yuhong Li, ODU, Peng Jiang, ODU, and Leigh Armistead, Peregrine Technical Solutions LLC
Project Abstract: The increasing adoption of Artificial-Intelligence-of-Things (AIoT) in maritime transportation systems (MTS) has the potential to bring significant benefits, including increased efficiency and safety. However, the integration of AIoT also introduces new vulnerabilities that can be exploited by the unprecedentedly evolving cyber threat actors. In this project, we strive to help ensure the safe and secure integration of AIoT in the maritime transportation industry, enabling it to realize the full potential of this technology without exposing itself to undue risks. To that end, we will spotlight the specific cybersecurity challenges faced by AIoT-enabled MTS and propose strategies for mitigating these risks. A comprehensive set of penetration tests will be tailored and performed on an MTS testbed to enable experiments and analyses of real-world cyberattacks on AIoT-enabled maritime transportation. Based on the experimental results, we will develop two defense models for improving the cybersecurity of the AIoT-enabled MTS: one can defend against the neural backdoor attacks that target on its multi-modal data inputs, and the other can detect the malicious signals hidden behind the background traffic of a complex communication network. These defense models will be designed to be practical and achievable for industry stakeholders, with a focus on the alignment with the Cybersecurity Maturity Model Certification (CMMC) program to ensure their implementation has minimal disruption to current maritime operations. Future proposals will be developed based on the project outcomes to solicit federal fundings from NSF and DoD to sustain the research topic.
FY 2022 Cybersecurity Research Projects
Project: Developing A Smart City Virtual Lab to Support CPS Experiential Learning
Project Team: Murat Kuzlu, ODU and Sherif Abdelwahed, VCU
Project Abstract: In this project, our team will develop a virtual smart city lab environment, called VirtualLab@OpenCity, which engages researchers, students, and companies with smart city challenges, such as automation, data analysis, service reliability and sustainability. VirtualLab@OpenCity will provide an experimental environment using a standardized service that supports remote connectivity, data collection, visualization, analysis, resource management and control. VirtualLab@OpenCity aims to build Virginia’s cyber-physical systems (CPS) workforce with hands-on-experience on new technologies that ultimately lead to innovative smart city solutions. This project will contribute to positioning Virginia as a global leader in secure and trustworthy cyber-physical systems by (a) providing students, researchers, and developers a virtual ecosystem of advanced CPS technologies, (b) providing guidance and support to employ advanced technologies and innovative management systems for ongoing and future smart city plans and (c) foster fruitful collaboration between academia and industry to build a Commonwealth-wide smart city innovation workforce.
Complete Proposal: CV-008-Kuzlu_COVA_CCI_21_Final
Project Presentation: Developing a SmartCity Virtual Lab to Support CPS Experiential Learning (Mural Kuzlu)
Project: Comprehensive Assessment and Diagnostics for Federated AI Algorithms in Cyber-Physical Systems
Project Team: Rui Ning, ODU, Jiang Li, ODU, Chunsheng Xin, ODU, Xinwei Deng, VT, Yili Hong, VT, and Luara Freeman, VT.
Project Abstract: Federated Artificial Intelligence (AI) is becoming a critical part of cyber physical systems (CPS) in modern maritime, defense and transportation industry, with its game-changing capability for handling large volumes of data and making collaborative complex decisions in support of self-control and self-actuation systems. While the Federated AI is actively
integrated into CPS applications, its malfunction can cause catastrophic failure or even be life-threatening for security-essential and safety-critical CPS such as in transportation and defense. Worse yet, as the Federated AI system incorporates AI and distributed devices, it inevitably introduces heterogeneity, randomness, contamination. Specifically, local data of different participants can be noisy and imbalanced, resulting in performance degradation. Moreover, it is also vulnerable to data poisoning attacks. The overarching
goals of this project include (1) establishing a design of experiments (DoE) framework to enable systematical investigation of the security and robustness of the Federated AI system; (2) investigating, assessing and unveiling characteristics of Federated AI models under different data imperfections; (3) developing effective schemes to comprehensively
diagnose given Federated AI models for potential data imperfections; (4) developing experimental environment for secure and robust Federated AI research. The project will also develop training modules of secure and robust Federated AI, aiming to prepare students and practitioners with advanced skills to succeed in a cybersecurity career. Overall, the proposed work will lead to enabling technologies for secure and robust Federated AI systems, accelerating their development and broadening their adoption in various application domains, especially the transportation, defense, and maritime sectors.
Complete Proposal: CV-006-Ning-COVA CCI Cybersecurity Research and Innovation_Rui_updated
Project Presentation: Ning_Presentation Rui Ning
Project: CIVIIC: Cybercrime in Virginia: Impacts on Industry and Citizens
Project Team: Randy Gainey, ODU, Tancy Vandecar-Burdin, ODU, Jay Albanese, VCU, James Hawdon, VT, Katalin Parti, VT, and Thomas Dearden, VT
Project Abstract: Victimization from cybercrime is a major concern in Virginia, the US, and the world. It is estimated that in 2020, cybercrime resulted in Americans losing an estimated $4.2 billion (FBI 2021). Yet, precise measurement and understanding of the nature of the problem, methods, types, and targets is lacking. While the FBI maintains the Internet Crime Report (IC3), these data are limited to only those crimes reported by victims,
which is only a small fraction of the cybercrimes that occur. While a few national citizen and business surveys have been conducted on specific types of cybercrime, the samples have been small, and there is reason to believe their findings may not represent the experience in the Commonwealth. Virginia presents a unique intersection of cyber physical systems with its large workforce in the maritime, defense, and transportation sectors, combined with an educated and mobile workforce making it a uniquely targeted area compared to many other states. This project will create, deploy,
analyze, and report on a statewide cybercrime survey of both citizens and businesses. A study and analysis specifically focused on Virginia will enable the delineation of the highest priority threats, identify cybercrime methods used, and provide an assessment of geographic, demographic and industry variation in victimization across the state. The project will provide baseline knowledge and data for future policy, research, and interventions to reduce exposure to cyber victimization in the Commonwealth.
Complete Proposal: CV-004-Gainey-CIVIIC_CCI_track2_FINAL
Project Presentation: Cybercrime in Virginia_ppt_6.2022
Final Survey Report: Survey and Final Report

Project: A Real-Time Dependency Network Approach to Quantifying Risks and
Ripple Effects from Cyberattacks in Shipbuilding and Repair Supply Networks
Project Team: Rafael Diaz, ODU and Helen Shen, UVA
Project Abstract: The evolution of defense shipbuilding supply networks toward digital environments increases operational complexity and requires reliable communication and coordination to regulate information exchange. As workers and suppliers transition to digital platforms, interconnection,
information transparency, and decentralized decisions become prevalent. The appearance and extensive use of these digital platforms inexorably increase their exposure to cyberattacks. Unfortunately, the effects of a systematic cyberattack on one or more nodes belonging to the shipbuilding supply network (e.g., Colonial Pipeline) are unknown. This collectively may represent
a substantial source of disruption. Cybersecurity protection of these networks requires a systemic approach to evaluate their vulnerability and understand ripple effects. However, current evaluation technologies and techniques are primarily applied to individual nodes or firms (if they are applied at all) and commonly lack systemic perspectives that consider overlapping risks and tiered hierarchies. To overcome these limitations, we propose developing a cybersecurity supply network Artificial Intelligence (A.I.) framework that enables characterizing and monitoring shipbuilding
supply networks and determining ripple effects from disruptions caused by cyberattacks. By representing and replicating the collective behavior of relevant shipbuilding supply network nodes, shipbuilders can monitor and measure the impact of cybersecurity disruptions and test the reconfiguration options that minimize the detrimental effects on the supply network. This
framework extends a novel risk management framework developed by Diaz and Smith (2021) and Smith and Diaz (2021) that considers complex tiered networks and systemic hypervulnerabilities (COVA CCI – 2021ODU-06.005) and is currently tested in the port security cyber physical setting.
Complete Proposal: CV-002-Diaz-210682_Diaz, Rafael
Project Presentation: An Artificial Intelligence Approach to Assess Shipbuilding and Repair Supply Networks (Rafael Diaz)
Project: Towards Trustworthiness in Autonomous Vehicles
Project Team: Evgenia Smirni, W&M and Homa Alemzadeh, UVA
Project Abstract: Autonomous vehicles (AVs) are one of the most complex software-intensive Cyber-Physical Systems (CPS). In addition to the basic car machinery, they are equipped with driving assistance mechanisms that use smart sensors and machine learning (ML) for environment perception, pathfinding, and navigation. Even though tremendous progress has been made in advancing the safety and security of AVs, they are shown to be vulnerable to accidental and malicious faults that negatively affect their perception and control functionality and result in safety incidents. Recent works have highlighted two major challenges in safety validation and assurance of AVs: (i) With the increasing use of specialized hardware accelerators (GPUs) for running ML-based perception algorithms, AV control systems have become susceptible to transient faults (soft errors) that can result in erroneous ML inference and unsafe decision making and control. (ii) Safety assurance for AVs requires testing their resilience by identifying and simulating realistic safety-critical fault and attack scenarios by mining a tremendous fault space. To address these challenges, this project brings together a team of experts in GPU and CPS resilience from two CCI nodes to develop a holistic approach for end-to-end resilience assessment of AVs. We combine strategic fault injection at both hardware accelerator and controller software levels to assess the sensitivity of the ML components and control system to accidental or malicious faults and identify critical components and system states. The results from this project will make a firm step towards achieving trustworthiness in autonomous vehicles.
Complete Proposal: CV-007-Smirni-COVA_CCI_2021_AV_WM-UVA
Project Presentation: Towards Trustworthiness in Autonomous Vehicles (Evgenia Smirni)
FY 2020-2021 Cybersecurity Research and Innovation Projects.
COVA CC released a Request for Proposals in March 2020 for researchers to conduct fundamental research leading to breakthroughs in CPSS. A total of five projects were selected for this first round of cybersecurity research funding.
Leveraging AI and Machine Learning to Develop New CPSS and Workforce Development Solutions
Project Abstract: Data breaches and cyberattacks are now a daily reality for entities across the globe. As these attacks increase in frequency and sophistication, organizations are faced with an increasing shortage of quality trained cyber security professionals with the most current knowledge to meet this growing crisis. The proposed research & development project will seek to determine how to effectively automate the match of candidates to cyber jobs and associated training using Artificial Intelligence, analytics and novel data collection methodologies. The project will leverage crowd-sourced content and input to surface new approaches to developing disruptive cyber-physical systems through the use of workforce assessment and experiential education via a secure platform (www.idispla.org) and planned cloud-based cyber Insights Engine. The proprietary Insights Engine will apply Artificial Intelligence and Machine Learning, to power talent aptitude assessment and identification, and deliver smart training to match and develop personnel specifically for the roles for which they are best suited. This first ever effort will be led by researchers at Old Dominion University, supported by a team which includes: Melvin Greer, Chief Data Scientist, Americas, Intel Corp; and Dr. Nibir Dhar, Chief Scientist at Army Night Vision & Electronic Sensors Directorate; and Carlos Rivero, Chief Data Officer for the Commonwealth of Virginia, and CivilianCyber of Richmond, Virginia.
Project Team: Dr. Deri Draper, Old Dominion University, ddraper@odu.edu; Bobby Kenner, CivilianCyber, bobby@civiliancyber.com.
Encouraging Positive Changes in Cyber Hygiene Behaviors and Knowledge in the Department of Defense.
Project Abstract: Requesting funding to develop SCORE to identify poor cyber-hygiene behavior, and to design an interface that effectively increases users’ awareness of their cyber risk. The system will alert users to at-risk behaviors and creates reports. The goal is to show SCORE can raise the awareness of cybersecurity policy violations. Both the technology and user experience will be developed to increase users’ cyber awareness, knowledge, and willingness to comply with cybersecurity policies.
Project Team: Dr. Jeremiah Still, Old Dominion University, jstill@odu.edu and Mike Ihrig, MI Technical Solutions.
Explore Privacy-Preserving in Deep Image Retrieval Systems
With the rapid growth of visual content, deep learning to hash is gaining popularity in the image retrieval community recently. Although it facilitates search efficiency, privacy is also at risks when images on the web are retrieved at a large scale and exploited as a rich mine of personal information. An adversary can extract private images by querying similar images from the targeted category for any usable model. Existing methods based on image processing preserve privacy at a sacrifice of perceptual quality. In this research, we propose a novel privacy-preserving mechanism based on adversarial learning to “stash” private images in the deep hash space while still maintaining perceptual similarity in both white-box and black-box settings. The research is expected to establish and deepen multi-institutional collaboration between William Mary, ODU and Hampton University and provide opportunities to include undergraduate and minority students into AI and security research. The ubiquityof AI technology brings both opportunities and challenges: offering convenience at the expense of our privacy. This research targets at a unique angle of the pervasive privacy challenges on the Internet and exploits a new vulnerability of AI algorithms to preserve privacy. If successful, the fundamental algorithms and tools provided will be transformative to enhance the ongoing research of AI security.
Project Team: Dr. Cong Wang, Old Dominion University, c1wang@odu.edu, Dr. Qun Li, William and Mary, liqun@cs.wm.edu, and Dr. Janett Walters-Williams, Hampton University, janett.williams@hampton.edu.
Securing IoT Devices through Power Side Channel Auditing and Privacy Preserved Convolutional Neural Networks
Internet of Things (IoT) devices have become the new cybercrime intermediaries to process cyber attacks and deploy malicious contents. The reasons are two folds. First, the popularity of IoT devices has attracted cybercriminals to conduct large-scale cyber attacks. Second, the cybercriminals also take advantage of the innocence of IoT devices, compared to the dedicated hosts, to deploy cyber attacks and evade the IP blacklist-based detection. Further, some of the IoT devices, such as web cameras and routers, were known for their weak security protection. Although there have been indications of IoT devices misuse, identifying and understanding how such devices are abused are challenging, because IoT bot attacks are stealthy, IoT devices are diverse and resource limited, and desired IoT bot detections need to be non-invasive. As a result, existing techniques cannot be directly applied to capture IoT bots, because they require invasive devices upgrade or modification. Also, these techniques are typically limited to detecting homogeneous devices (e.g., just PCs). Therefore, we propose a novel scheme to exploit IoT devices’ power side channel information to identify the compromised IoT devices.Specifically, we propose a universal Smart Plug design that provides power for heterogeneous IoT devices while at the same time detect malicious bot behaviors through Convolutional Neural Networks (CNN). A LEAP framework is proposed to offload CNN computation from IoT devices to the cloud while at the same time ensuring data privacy of IoT devices.
Project Team: Dr. Gang Zhou, William and Mary, gzhou@cs.wm.edu, Dr. Chunsheng Xin, Old Dominion University, cxin@odu.edu, and Dr. Danella Zhao, Old Dominion University, dzhao@odu.edu.
Trust, Interoperability and Inclusion: A Framework for Creating Cyber-Trust in Connected Homes
Internet-of-Things (IoT) devices are a growing part of people’s lives, collecting and communicating everything from health information to data on appliance use in homes. Users may be aware of some data collection practices, but there are also hidden ways in which devices collect data. These issues can be exacerbated when people are on the spectrum (hearing, vision, physical/motor, autism) because devices may not adjust to accommodate these differences. Furthermore, these pools of data are susceptible to cyber attacks and misuse in ways that may not be readily apparent to users. There is a gap in trust between devices in spaces and the people who inhabit those spaces. Therefore, we want to create, implement, and test a cyber-trust framework (CTF) that considers elements such as manufacturer information; background and experience of users, focusing specifically on people on the spectrum; and content collection (disclosed and undisclosed). The CTF will be rooted in technical, empirical, and theoretical thrusts, and this research will contribute to CCI’s mission, as noted in the blueprint, of establishing Virginia as a global leader in secure cyberphysical systems and the digital economy.
Project Team: Dr. Stephanie Blackmon, William and Mary, sjblackmon@wm.edu, Dr. Saikou Diallo, Old Dominion University, sdiallo@odu.edu, and Dr. D.E. Wittkower, Old Dominion University, dwittkower@odu.edu.

COVA CCI is supported by the Commonwealth Cyber Initiative and funded through the Commonwealth of Virginia.
Contact: covacci@odu.edu
