Recent Research Projects


The International Center of Excellence in Software Engineering (ICESE)

Funded by: The Qatar National Research Fund (QNRF)

The objective of this project is to establish an International Center of Excellence in Software Engineering (ICESE), a multifaceted center with active programs in research, outreach, and education. Its goal is to coordinate outreach and collaborative research involving faculty with expertise in software engineering and artificial intelligence from multiple universities, together with industry collaborators.

_____________________________________________________

 

The Automated Dental Identification System (ADIS)

Visit the ADIS website.

Funded by the US National Science Foundation (NSF) and the US National Institute of Justice (NIJ)

In this research we develop state-of-the-art digital image processing techniques to build a prototype Automated Dental Identification System (ADIS). Given the dental record of a subject, the goal of ADIS is to find, accurately and in a timely manner, a short list of candidates whose dental features are identical or close to those of the subject. The forensic expert then decides which of the few candidates is the subject.
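
As a rough illustration of the retrieval step only (names and data are invented; ADIS itself extracts such features from radiographs through image-processing pipelines), the Python sketch below ranks reference records by feature distance and returns the short list:

    # Hypothetical sketch of short-list retrieval: rank reference dental
    # records by feature distance and return the k closest candidates.
    from math import dist

    def short_list(subject_features, reference_db, k=10):
        ranked = sorted(reference_db.items(),
                        key=lambda item: dist(subject_features, item[1]))
        return [record_id for record_id, _ in ranked[:k]]

    db = {"case-001": [0.2, 0.9], "case-002": [0.8, 0.1], "case-003": [0.25, 0.85]}
    print(short_list([0.22, 0.88], db, k=2))  # -> ['case-001', 'case-003']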

______________________________________________________

Architectural Level Software Metrics

Funded by NASA and NSF

Evaluating the quality attributes of software architectures has become a major research focus. We recognize that advances in quantitative measurement are crucial to the vitality of the discipline of software IV&V. In this project we focus on defining and investigating metrics for domain architectures. We aim to define metrics that reflect relevant qualities of domain architectures and alert the software architect to risks in the early stages of architectural design. We envision that such metrics should rest on a theoretical foundation, primarily information theory, and should be specific to the architectural level.
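
As a toy illustration of an information-theoretic architectural measure (an assumption for illustration, not the project's actual metric), the sketch below scores a component by the entropy of its interaction profile:

    # Illustrative entropy-based coupling measure: a component whose
    # interactions are spread evenly across many neighbors needs more
    # information to predict, and so scores higher.
    from math import log2

    def interaction_entropy(interaction_counts):
        total = sum(interaction_counts)
        probs = [c / total for c in interaction_counts if c > 0]
        return -sum(p * log2(p) for p in probs)

    print(interaction_entropy([10, 10, 10, 10]))  # 2.0 bits: evenly coupled
    print(interaction_entropy([37, 1, 1, 1]))     # ~0.5 bits: one dominant link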

______________________________________________________

Verification and Validation of UML Dynamic Specifications

Funded by NASA

Recent advances in object-oriented development methodologies and tools have prompted their increasing use in developing mission- and safety-critical software systems such as the International Space Station. New analysis and measurement techniques for object-oriented artifacts, especially at the early stages of development, are needed to support the IV&V process. The problem addressed in this project is the measurement and analysis of the real-time dynamic behavior of software specification and design artifacts for applications modeled in UML. This includes the verification of the performance and timing behavior of real-time activities, complexity analysis, and risk assessment.
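
As a minimal illustration of one kind of timing check involved (the trace format and deadline table are assumptions, not the project's notation), a scenario trace from a dynamic model can be verified against per-activity deadlines:

    # Check timed event traces from a dynamic model against deadlines.
    def check_deadlines(trace, deadlines):
        """trace: (activity, start_ms, end_ms) tuples;
        deadlines: activity -> max allowed duration in ms."""
        violations = []
        for activity, start, end in trace:
            limit = deadlines.get(activity)
            if limit is not None and end - start > limit:
                violations.append((activity, end - start, limit))
        return violations

    trace = [("read_sensor", 0, 4), ("control_step", 4, 16), ("actuate", 16, 19)]
    print(check_deadlines(trace, {"control_step": 10}))  # [('control_step', 12, 10)]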

______________________________________________________

Software Metrics for Product-Line Architectures

Funded by the NSF Information Technology Research (ITR) Program

Product line engineering (PLE) is a specialized form of software reuse that has recently attracted the interest of software researchers and practitioners. PLE facilitates the production of similar products in the same domain through the composition of common domain artifacts. Software architecture is generally perceived as an effective artifact for controlling the evolution of product lines: it embodies the earliest decisions for the product line and provides a framework within which reusable components can be developed. Evaluating the quality attributes of software architectures has become a major research focus. Following the well-known adage that "a science is as advanced as its instruments of measurement," we recognize that advances in quantitative measurement are crucial to the vitality of the discipline of product line engineering. The focus of this project is on defining and investigating metrics for domain architectures. The objective is to define metrics that reflect relevant qualities of domain architectures and alert the software architect to risks in the early stages of architectural design. The main result of this project is the analysis of deterministic and statistical relationships between quantitative factors and computable metrics. A compiler will be developed to automatically calculate the metrics from a formal description of the architecture, and a validation study of the computable metrics will be conducted. The significance of this research lies in increasing our knowledge of how architectures can be evaluated theoretically and quantitatively. The approaches developed will enable practitioners to quickly develop "no-surprises" software and to detect risks accurately. Society as a whole will benefit from the resulting improvements in software safety and quality.
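
To make the "compiler" idea concrete, the sketch below computes one plausible metric, a commonality ratio, from a toy line-based architecture description; both the input format and the metric are illustrative assumptions, not the project's notation:

    # Compute a metric directly from a formal architecture description.
    def commonality_ratio(description):
        """description: lines of the form 'component <name> common|variant'."""
        common = total = 0
        for line in description.strip().splitlines():
            _, _, kind = line.split()
            total += 1
            common += (kind == "common")
        return common / total

    spec = """component logger common
    component billing variant
    component auth common"""
    print(commonality_ratio(spec))  # ~0.67: two of three components are shared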

______________________________________________________

 

A Web-enabled COTS-supported Component-Based Design for the Interactive Electronic Technical Manual (IETM) Specification

Funded by ManTech, Inc., through the Software Engineering Research Center (SERC)

An IETM is a digital package of the information required for the diagnosis and maintenance of complex weapon systems and of both military and commercial equipment. The lack of interoperability within and among IETM systems has become a major challenge to the U.S. DoD IETM community. The initial phase of user-level interoperability support has been undertaken through the development of a Web-based Joint IETM Architecture (JIA). Within this architecture, there is a need to develop a standardized Web-enabled alternative to the IETM database specification. We have developed a UML-based object-oriented model that provides the conceptual foundation for assembling this Web-enabled alternative. The model presents a component-based architectural design using Web technologies such as Java and XML. We intend to implement this architecture in two alternative demonstration systems to illustrate its robustness: the first is based on a file system, and the second on a Commercial Off-The-Shelf (COTS) database system.
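
The two-backend plan can be pictured as one data-access interface with interchangeable implementations. The sketch below uses Python and invented names purely to illustrate the component boundary; the actual design is UML-, Java-, and XML-based:

    import json, sqlite3

    class FileStore:                       # demonstration backend 1: file system
        def __init__(self, path): self.path = path
        def get(self, topic_id):
            with open(self.path) as f:
                return json.load(f)[topic_id]

    class DbStore:                         # demonstration backend 2: COTS database
        def __init__(self, conn): self.conn = conn  # e.g. an sqlite3 connection
        def get(self, topic_id):
            row = self.conn.execute(
                "SELECT body FROM topics WHERE id = ?", (topic_id,)).fetchone()
            return row[0]

    def render_topic(store, topic_id):
        # The viewer depends only on the shared get() interface, so either
        # backend can be plugged in without changing presentation code.
        return "<ietm-topic>" + store.get(topic_id) + "</ietm-topic>"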

Scenario-based Independent Verification and Validation of UML Specifications

Funded by Averstar, Inc., through the Software Engineering Research Center (SERC)

The objective of this project is to develop techniques to verify and validate UML dynamic specifications. Verification techniques should be based on quantitative metrics that can be evaluated systematically using existing tools, with little reliance on subjective judgments from domain experts. Validation techniques will be based on scenario-based testing and simulation of the dynamic models.

 

We perceive that verification and validation of UML dynamic specifications can be performed using the recently released Rational Rose Real-Time (RRRT). This tool was developed by Rational Software (www.rational.com), the originator of UML, in collaboration with ObjecTime Limited (www.ObjecTime.com), the originator of Real-Time Object-Oriented Modeling (ROOM). The work in this project focuses on the development of UML-based dynamic simulation models. Based on these models, our research will investigate methodologies for conducting timing, reliability, and complexity analysis.
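
A stripped-down picture of scenario-based validation (the state machine and scenario below are invented for illustration): drive a model of the dynamic specification with an event sequence and compare the visited states against the expected ones.

    transitions = {  # (state, event) -> next state
        ("idle", "start"): "running",
        ("running", "pause"): "paused",
        ("paused", "start"): "running",
        ("running", "stop"): "idle",
    }

    def run_scenario(start, events):
        state, visited = start, [start]
        for event in events:
            state = transitions[(state, event)]  # KeyError = undefined behavior
            visited.append(state)
        return visited

    assert run_scenario("idle", ["start", "pause", "start", "stop"]) == \
           ["idle", "running", "paused", "running", "idle"]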

 


 

Center of Software Evolution: Software Architecture and Design Patterns

Funded by EPSCoR

 

The current research activities include:

·       Designing a framework for closed-loop control systems using an existing set of design patterns. The patterns used in the framework have been tested and used in many other applications. The main target of the proposed framework is the utilization of these patterns and their interconnection to produce a generic architecture for a feedback control system (a minimal sketch of this pattern-based composition appears after this list).

o       A design framework for feedback control applications has been developed.

 

·       Design and development of application-specific frameworks for real-time and distributed system applications. Reusable patterns are deployed in the proposed frameworks, and analysis and design patterns will be used in different phases of the analysis and design of each framework. An architecture based on a layered structure for distributed applications will be developed, and we will investigate the use of design patterns in suitable layers of this architecture. A distributed health care system will be used as an illustrative example, with health systems standards guiding the development of the framework.

o       We are investigating a development methodology for building frameworks using design patterns as their building blocks, which we call Pattern-Oriented Frameworks.

 

·       Abstraction of new design patterns used and produced in the real-time distributed application frameworks. New design patterns will be investigated when required in the framework design. Currently we have documented:

o       A set of Finite State Machine patterns

o       A pattern language of statecharts

 

·       Developing specifications for building frameworks for large-scale systems using design patterns. The objective is to define the interconnection relationships between patterns so that frameworks can be developed easily and quickly.
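
As a minimal sketch of the pattern-based composition mentioned in the first item above (all class names invented; a real framework also covers timing, sensors, and actuators), a control law can be a Strategy object plugged into a generic feedback loop:

    class ProportionalControl:             # Strategy: one interchangeable law
        def __init__(self, gain): self.gain = gain
        def output(self, setpoint, measured):
            return self.gain * (setpoint - measured)

    class ControlLoop:                     # generic framework skeleton
        def __init__(self, law, plant):
            self.law, self.plant = law, plant
        def step(self, setpoint):
            self.plant.apply(self.law.output(setpoint, self.plant.read()))

    class Tank:                            # toy plant for the example
        def __init__(self): self.level = 0.0
        def read(self): return self.level
        def apply(self, u): self.level += 0.5 * u

    loop = ControlLoop(ProportionalControl(gain=0.8), Tank())
    for _ in range(5):
        loop.step(setpoint=10.0)
    print(round(loop.plant.read(), 2))     # 9.22: level rises toward the setpoint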

 

 



Dynamic Load Balancing in Heterogeneous Network of Multiprocessing and Multicomputer Systems (Cluster Computing with Applications to Fluid Dynamics)

Funded by EPSCoR

Due to the tremendous decrease in the cost/performance ratio of workstations and the increase in their capabilities, the idea of clustering was born: gathering their computational power to build low-cost parallel machines. A cluster, as defined by Pfister [1], is "a parallel or distributed system that consists of a collection of interconnected whole computers, that is utilized as a single unified computing resource." Another motivation for clustering is to increase the usage of otherwise idle resources (an investigation at Los Alamos National Laboratory has shown that a workstation is utilized only about 10% of the time) and to offload other resources that may be heavily utilized.

The key to good clustering is a suite of scheduling algorithms that both minimize the execution time of a given job and maximize the utilization of system resources by load balancing the cluster. The job scheduler's main task is to accept jobs submitted by cluster users and allocate the necessary resources to them. To achieve this goal, the scheduler must keep track of all cluster resources, their capabilities, and their utilization. It must also support both parallel and non-parallel jobs, and it must be able to work on a heterogeneous cluster composed of both small workstations and large parallel machines. Our concern in this research is to develop new scheduling techniques for parallel jobs in a network of heterogeneous systems composed of workstations and parallel machines.
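
The core placement decision can be sketched in a few lines (hypothetical names; a production scheduler must also handle queues, parallel job placement, and preemption): assign each job to the node whose relative load after accepting it would be lowest.

    def pick_node(nodes, job_cost):
        """nodes: name -> {'capacity': float, 'load': float}."""
        return min(nodes, key=lambda n:
                   (nodes[n]["load"] + job_cost) / nodes[n]["capacity"])

    def submit(nodes, job_cost):
        name = pick_node(nodes, job_cost)
        nodes[name]["load"] += job_cost      # track utilization per node
        return name

    cluster = {"workstation":      {"capacity": 1.0, "load": 0.1},
               "parallel-machine": {"capacity": 8.0, "load": 2.0}}
    print(submit(cluster, job_cost=1.0))     # -> 'parallel-machine'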


 

CEMR Laboratory for Computer-Based Instructional Technologies (LCIT)

Co-funded by the State of West Virginia and the College of Engineering and Mineral Resources

The laboratory provides:

  • a collaborative research facility of equipment, hardware, and software for the investigation of advanced hardware, software, database, and delivery components for emerging and future computer-based educational modules/courses,
  • an advanced technologies acquisition and assessment laboratory to expedite the insertion of rapidly evolving educational technologies into the university curriculum,
  • a development and verification laboratory providing technical assistance to educators throughout WVU and WVUIT for deploying computer-based educational modules/courses within a coherent framework, and
  • a "showcase" facility to demonstrate the application of computer-based education to other universities, colleges, K-12 schools, and industry for 21st-century learning environments.

Risk Assessment and Performability Analysis of Software System Specifications

Funded by NASA Goddard

  • "Risk Assessment of Functional Specification of Software Systems Using Colored Petri Nets,"
    Proceedings of the International Symp. on Software Reliability Engineering (ISSRE'97), IEEE Computer Soc., November 1997.
    This paper presents an example of risk assessment in complex real-time software systems at the early stages of development. A heuristic risk assessment technique based on Colored Petri Net (CPN) models is used to classify software components according to their relative importance in terms of factors such as severity and complexity. The methodology of this technique is presented in a companion paper [1]. The technique is applied to the Earth Operation Commanding Center (EOC_COMMANDING), a large component of NASA's Earth Observing System (EOS) project. Two specifications of the system are considered: a sequential model and a pipeline model. Applying the technique to the two CPN-based models yields different complexity measures: the pipeline model clearly shows a higher risk factor than the sequential model, whereas with traditional complexity measures the risk factors of the two models were similar. The technique identifies components with a high risk factor, which would require the development of effective fault-tolerance mechanisms.
  • A Methodology for Risk Assessment and Performability Analysis of Large Scale Software Systems
    International Conference on Engineering Mathematics and Physics, Cairo, Egypt, December 1997
    (PDF file, 95k)
    This paper describes a methodology for the modeling and analysis of large-scale software specifications of concurrent real-time systems. Two types of analysis, namely risk assessment and performability analysis, are presented. Both are based on simulations of Colored Petri Net (CPN) software specification models. These CPN models are mapped from software specifications originally developed using Computer-Aided Software Engineering (CASE) tools. The methodology thus lends itself to a three-step process: in the first step, CASE-based models are mapped to the CPN notation; in the second, the CPN models are completed for scenario-based simulations; and in the third, the models are simulated for risk assessment and performability analysis. A model of a large industrial-scale software specification is presented to illustrate the usefulness of this approach. The model is based on a component of NASA's Earth Observing System (EOS).

  • A Methodology For Risk Assessment of Functional Specification of Software Systems Using Colored Petri Nets
    International Symp. on Software Metrics, IEEE Computer Soc., Nov. 1997
    (Word Doc file, 95k)
    This paper presents a methodology for risk assessment in complex real-time software systems at the early stages of development, namely the analysis/design phase. A heuristic risk assessment technique based on Colored Petri Net (CPN) models is described. The technique uses complexity metrics and severity measures to develop a heuristic risk factor from software functional specifications. The objective of risk assessment is to classify the software components according to their relative importance in terms of factors such as severity and complexity. Both traditional static and dynamic complexity measures are supported. Concurrency complexity is presented as a new dynamic complexity metric that measures the added dynamic complexity due to concurrency in the system. Severity analysis is conducted using failure mode and effect analysis (FMEA).

  • Performability Analysis of the Commanding Component of NASA’s Earth Observing System
    The 10th International Conf. on Parallel and Distributed Computing, New Orleans, Oct. 1997
    (PDF file, 26k)
    The objective of this work is to develop methods and techniques for generating verification and analysis models from the notations used for parallel and distributed systems specifications. The resulting verification models can be subjected to extensive and exhaustive verification against the requirement specifications. This paper presents an application of our methodology for integrating a CASE environment based on the SART (Structured Analysis with Real Time) notation with a CPN (Coloured Petri Nets) based verification environment.
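
Common to the risk-assessment papers above is a heuristic risk factor combining complexity with FMEA-derived severity. The sketch below shows one plausible multiplicative form; the weights and formula are illustrative assumptions, not the published technique:

    SEVERITY = {"catastrophic": 0.95, "critical": 0.75,
                "marginal": 0.50, "minor": 0.25}

    def risk_factor(complexity, max_complexity, severity_class):
        return (complexity / max_complexity) * SEVERITY[severity_class]

    components = [("commanding", 120, "catastrophic"), ("logging", 80, "minor")]
    max_c = max(c for _, c, _ in components)
    for name, c, sev in components:
        print(name, round(risk_factor(c, max_c, sev), 2))
    # commanding 0.95, logging 0.17: the high-risk component is flagged first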

Parallel Algorithms for an Automated Fingerprint Image Comparison System

Funded by NSF/EPSCoR

  • Parallel Algorithms for an Automated Fingerprint Image Comparison System
    International Symp. on Parallel and Distributed Processing (SPDP'96), IEEE Computer Soc., Oct. 1996
    (PDF file, 220k)
    This paper addresses the problem of developing efficient parallel algorithms for the training procedure of a neural network based Fingerprint Image Comparison (FIC) system. The target architecture is assumed to be a coarse-grain distributed-memory parallel architecture. Two types of parallelism are investigated: node parallelism and training set parallelism (TSP). Theoretical analysis and experimental results show that node parallelism has low speedup and poor scalability, while TSP proves to have the best speedup performance. TSP, however, suffers from a slow convergence rate. To reduce this effect, a modified training set parallel algorithm using weighted contributions of synaptic connections is proposed. Experimental results show that this algorithm provides a fast convergence rate while keeping the best speedup performance obtained (a simplified single-process sketch of the training-set-parallel idea appears after this list).

  • Implementation of a Training Set Parallel Algorithm for an Automated Fingerprint Image Comparison System
    International Conf. on Parallel Processing (ICPP'96), IEEE Computer Soc., Aug. 1996
    (PDF file, 116k)
    This paper addresses the problem of implementing a training set parallel algorithm (TSPA) for the training procedure of a neural network based Fingerprint Image Comparison (FIC) system. Experimental results on a 32-node CM-5 system show that TSPA achieves almost linear speedup performance. This parallel algorithm is applicable to ANN training in general and does not depend on the ANN architecture.
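
As referenced above, here is a simplified single-process simulation of training set parallelism; the toy model, learning rate, and size-weighted combination of per-partition updates are illustrative assumptions standing in for the weighted-contribution scheme:

    def local_gradient(w, samples):
        # gradient of mean squared error for the toy model y = w * x
        return sum(2 * (w * x - y) * x for x, y in samples) / len(samples)

    def tsp_step(w, partitions, lr=0.01):
        total = sum(len(p) for p in partitions)
        grad = sum(len(p) / total * local_gradient(w, p) for p in partitions)
        return w - lr * grad

    data = [(x, 3.0 * x) for x in range(1, 9)]   # true weight is 3.0
    parts = [data[:4], data[4:]]                 # two "nodes", one partition each
    w = 0.0
    for _ in range(50):
        w = tsp_step(w, parts)
    print(round(w, 3))                           # converges toward 3.0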

 

The Collaborative Medical Informatics Laboratory (CMIL)


NSF/EPSCoR Medical Imaging and Image Processing Research Cluster (MIIPRS).

The proposed Medical Imaging and Image Processing Research Cluster is intended to provide an infrastructure for developing substantially stronger ties between research initiatives in science and engineering (initially focusing on the research programs of the Dept. of Electrical and Computer Engineering at WVU and the College of Engineering at WVUIT) and health science research initiatives (initially focusing on the R.C. Byrd Health Sciences Center research programs).




