In this
research we develop state-of-the-art techniques in digital image processing to
build a prototype Automated Dental Identification System (ADIS). Given the
dental record of a subject, the goal of ADIS is to find, accurately and in a timely manner, a
short list of candidates whose dental features are identical, or close, to
those of the subject. The forensic expert then decides which of the few
candidates is the subject.
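To make the retrieval step concrete, the following is a minimal sketch in Java of producing such a short list, assuming each record is reduced to a hypothetical numeric feature vector and ranked by a simple distance; the actual ADIS features and matching techniques are more elaborate and are not shown here.

    import java.util.*;

    /** Hypothetical sketch: rank archived dental records by distance to a subject's
     *  feature vector and keep the closest k as the short list for the expert. */
    public class ShortlistDemo {

        /** Placeholder record; real ADIS features (e.g., tooth contours, dental work) are richer. */
        record DentalRecord(String id, double[] features) {}

        static double distance(double[] a, double[] b) {
            double sum = 0.0;
            for (int i = 0; i < a.length; i++) {
                double d = a[i] - b[i];
                sum += d * d;
            }
            return Math.sqrt(sum);
        }

        /** Return the k archived records whose features are closest to the subject's. */
        static List<DentalRecord> shortlist(DentalRecord subject, List<DentalRecord> archive, int k) {
            return archive.stream()
                    .sorted(Comparator.comparingDouble(
                            (DentalRecord r) -> distance(r.features(), subject.features())))
                    .limit(k)
                    .toList();
        }

        public static void main(String[] args) {
            List<DentalRecord> archive = List.of(
                    new DentalRecord("A", new double[]{0.1, 0.9}),
                    new DentalRecord("B", new double[]{0.4, 0.5}),
                    new DentalRecord("C", new double[]{0.8, 0.2}));
            DentalRecord subject = new DentalRecord("S", new double[]{0.35, 0.55});
            shortlist(subject, archive, 2).forEach(r -> System.out.println(r.id()));
        }
    }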
Evaluating the quality attributes of software
architectures has become a major research focus. We recognize that advances in
quantitative measurement are crucial to the vitality of the discipline of
software IV&V. We focus in this
project on defining and investigating metrics for domain architectures. We wish to define such metrics so as to
reflect relevant qualities of domain architectures, and to alert the software
architect to risks in the early stages of architectural design. We envision
that such metrics should be based on a theoretical background, primarily on
information theory, and that they should be specific to the architectural level.
The recent advances in
object-oriented development methodologies and tools for software systems have
prompted their increasing use in developing mission- and safety-critical software
systems such as the International Space Station. New analysis and measurement techniques for
object-oriented artifacts, especially at the early stages of development, are
needed to support the IV&V process. The problem addressed in this project is
the measurement and analysis of the real-time dynamic behavior of software
specification and design artifacts for applications modeled in UML. This
includes the verification of performance and timing behavior of real-time
activities, complexity, and risk assessment.
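As an illustration of the kind of timing check involved, the sketch below sums assumed worst-case execution times along one activity sequence taken from a UML model and compares the total against a deadline; the activity names and figures are hypothetical, and the project's analysis also covers concurrency, complexity, and risk.

    import java.util.*;

    /** Illustrative check only: sums worst-case execution times along a UML activity
     *  sequence and compares against a deadline; a full analysis would also account
     *  for concurrency, blocking, and the scheduling policy. */
    public class TimingCheckDemo {

        record Activity(String name, double wcetMs) {}   // worst-case execution time (assumed)

        /** Verify that a scenario's activities complete within the deadline. */
        static boolean meetsDeadline(List<Activity> scenario, double deadlineMs) {
            double total = scenario.stream().mapToDouble(Activity::wcetMs).sum();
            return total <= deadlineMs;
        }

        public static void main(String[] args) {
            List<Activity> scenario = List.of(
                    new Activity("readSensor", 2.0),
                    new Activity("computeControl", 5.5),
                    new Activity("actuate", 1.5));
            System.out.println("Deadline met: " + meetsDeadline(scenario, 10.0));
        }
    }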
Product line engineering (PLE) is a specialized form of software reuse that has recently attracted the interest of software researchers and practitioners. PLE facilitates the production of similar products in the same domain through a composition of common domain artifacts. Software architecture is generally perceived as an effective artifact for controlling the evolution of product lines: it embodies the earliest decisions for the product line and provides a framework within which reusable components can be developed. Evaluating the quality attributes of software architectures has become a major research focus. Following the well-known phrase "A science is as advanced as its instruments of measurement," it is recognized that advances in quantitative measurement are crucial to the vitality of the discipline of product line engineering. The focus of this project is on defining and investigating metrics for domain architectures. The objective is to define such metrics so that they reflect relevant qualities of domain architectures and alert the software architect to risks in the early stages of architectural design.
The main result of this project is the analysis of deterministic/statistical relationships between quantitative factors and computable metrics. A compiler will be developed to automatically calculate metrics from a formal description of the architecture, and a validation study of the computable metrics will be conducted. The significance of this research lies in increasing our knowledge of how architectures can be evaluated theoretically and quantitatively. The approaches developed will enable practitioners to quickly develop "no-surprises" software and accurately detect risks. The impact of this work is that society as a whole will benefit from improved software safety and quality.
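To illustrate what such a metric compiler might compute, the following sketch evaluates one information-theoretic style measure, the Shannon entropy of how connectors are distributed over components, from a toy architecture description; the measure and all names are illustrative assumptions, not the metrics defined in this project.

    import java.util.*;

    /** Illustrative only: one way an information-theoretic metric could be derived
     *  from an architecture description -- the Shannon entropy of the distribution
     *  of connectors over components. Not the project's actual metric suite. */
    public class ArchitectureEntropyDemo {

        /** Entropy (in bits) of the connector-count distribution across components. */
        static double connectorEntropy(Map<String, Integer> connectorsPerComponent) {
            double total = connectorsPerComponent.values().stream().mapToInt(Integer::intValue).sum();
            double h = 0.0;
            for (int count : connectorsPerComponent.values()) {
                if (count == 0) continue;
                double p = count / total;
                h -= p * (Math.log(p) / Math.log(2));
            }
            return h;
        }

        public static void main(String[] args) {
            // Hypothetical component -> connector-count map extracted from a formal description.
            Map<String, Integer> arch = Map.of("Scheduler", 4, "Logger", 1, "Planner", 3);
            System.out.printf("Architecture entropy: %.3f bits%n", connectorEntropy(arch));
        }
    }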
Funded
by ManTech, Inc., through the
An IETM is a digital
package of information required for diagnosis and maintenance of complex weapon
systems and both military and commercial equipment. The lack of interoperability within and among
IETM systems has become a major challenge; this project develops a Web-enabled
alternative to the IETM database specification. We have
developed a UML-based object-oriented model, which provides the conceptual
foundation for assembling the Web-enabled alternative. The model presents a
component-based architectural design using Web technology such as Java and XML. We intend to implement this architecture in
two alternative demonstration systems to illustrate the robustness of the
architecture. The first is based on a file system and the second is based on a
Commercial Off The Shelf (COTS) database system.
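The sketch below illustrates the intent of the two demonstration back ends sitting behind a single repository interface, one file-based and one backed by a COTS database accessed through JDBC; the interface, class names, and schema are hypothetical, not the delivered design.

    import java.nio.file.*;
    import java.io.IOException;
    import java.sql.*;

    /** Hypothetical sketch of the two demonstration back ends behind one interface. */
    interface IetmRepository {
        String fetchTopic(String topicId) throws Exception;   // returns the XML content of an IETM topic
    }

    /** Demonstration 1: topics stored as XML files on the file system. */
    class FileIetmRepository implements IetmRepository {
        private final Path root;
        FileIetmRepository(Path root) { this.root = root; }
        public String fetchTopic(String topicId) throws IOException {
            return Files.readString(root.resolve(topicId + ".xml"));
        }
    }

    /** Demonstration 2: topics stored in a COTS relational database via JDBC. */
    class DbIetmRepository implements IetmRepository {
        private final String jdbcUrl;
        DbIetmRepository(String jdbcUrl) { this.jdbcUrl = jdbcUrl; }
        public String fetchTopic(String topicId) throws SQLException {
            try (Connection c = DriverManager.getConnection(jdbcUrl);
                 PreparedStatement ps = c.prepareStatement("SELECT content FROM topics WHERE id = ?")) {
                ps.setString(1, topicId);
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next() ? rs.getString(1) : null;
                }
            }
        }
    }

Because the client code depends only on the repository interface, either back end can be swapped in without changing the Web-facing components.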
Funded
by Averstar, Inc., through the
The objective of this project is to develop techniques to verify and
validate UML dynamic specifications. Verification techniques should be based on
quantitative metrics that can be evaluated systematically using existing tools
with little involvement of subjective measures from domain experts. Validation
techniques will be based on scenario-based testing and simulation of the
dynamic models.
We perceive that verification
and validation of UML dynamic specifications can be done using the recently
released Rational Rose Real-Time (RRRT). This tool was developed by Rational
Software (www.rational.com), the originator of UML, in collaboration with ObjecTime Limited (www.ObjecTime.com), the originator
of Real-Time Object-Oriented Modeling (ROOM). The proposed work in this project
is focused on the development of UML-based dynamic simulation models. Based on
these models, our research will investigate the development of methodologies
for conducting timing, reliability, and complexity analysis.
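As a toy stand-in for scenario-based simulation (not RRRT itself), the sketch below estimates the probability that a modeled scenario meets its deadline when each step's duration varies randomly, which is one way timing and reliability figures could be drawn from dynamic models; the step durations and the exponential jitter are assumptions.

    import java.util.Random;

    /** Toy Monte Carlo stand-in for scenario simulation: estimates the probability
     *  that a scenario meets its deadline when step durations vary randomly. */
    public class ScenarioSimulationDemo {

        public static void main(String[] args) {
            double[] meanStepMs = {2.0, 5.0, 1.5};   // assumed mean durations of scenario steps
            double deadlineMs = 10.0;
            int runs = 100_000;
            Random rnd = new Random(42);

            int met = 0;
            for (int r = 0; r < runs; r++) {
                double elapsed = 0.0;
                for (double mean : meanStepMs) {
                    // exponentially distributed duration with the given mean
                    elapsed += -mean * Math.log(1.0 - rnd.nextDouble());
                }
                if (elapsed <= deadlineMs) met++;
            }
            System.out.printf("Estimated P(meet %.1f ms deadline) = %.3f%n",
                    deadlineMs, (double) met / runs);
        }
    }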
Center of Software Evolution: Software Architecture and
Design Patterns
Funded by EPSCoR
The current research activities include:
·
Designing a framework
for closed-loop control systems using an existing set of design patterns. The
patterns used in the framework have been tested and used in many other
applications. The utilization of the patterns and their interconnection to
produce a generic architecture for a feedback control system is the main target
of the proposed framework (a sketch of such a loop appears after this list).
o
A
design framework for feedback control applications
has been developed.
·
Design and development
of application-specific frameworks for real-time software systems and distributed system applications. Reusable
patterns are deployed in the proposed frameworks. Analysis and design
patterns shall be used in different phases of analysis and design of the
framework. An architecture based on a layered structure for
distributed applications shall be developed. Investigation will be
conducted on using Design Patterns in suitable
layers in this layered architecture. A distributed health care
system will be used as an illustrative example. Health systems standards
will be used to guide the development of the framework.
o
We are investigating a development methodology to build
frameworks using design patterns as their building blocks, which we call Pattern-Oriented Frameworks.
·
Abstraction of new
design patterns used and produced in the real-time distributed application
frameworks. New design patterns shall be investigated when required in the
framework design. Currently we have documented:
o
A set of Finite State Machine patterns (one is sketched after this list)
o
A
pattern language of statecharts
·
Developing specifications
for building frameworks for large-scale systems using design patterns.
The objective is to define the interconnection relationships between patterns so that
frameworks can be developed quickly and easily.
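As a sketch of the feedback control framework idea mentioned in the first item above, the following combines two patterns: the control law as a Strategy and the sense-compute-actuate loop as a Template Method. The class names and the proportional law are illustrative assumptions, not the framework's actual design.

    /** Hypothetical sketch of a generic closed-loop framework built from patterns. */
    abstract class ControlLoop {
        interface ControlLaw { double control(double error); }   // Strategy: pluggable control law

        private final ControlLaw law;
        protected ControlLoop(ControlLaw law) { this.law = law; }

        protected abstract double readSensor();           // plant-specific steps
        protected abstract void actuate(double command);

        /** Template Method: one fixed iteration of sense -> compute -> actuate. */
        public final void step(double setPoint) {
            double error = setPoint - readSensor();
            actuate(law.control(error));
        }
    }

    /** Example concrete system: proportional control of a simulated heater. */
    class HeaterLoop extends ControlLoop {
        private double temperature = 20.0;
        HeaterLoop(double gain) { super(error -> gain * error); }
        protected double readSensor() { return temperature; }
        protected void actuate(double command) { temperature += 0.1 * command; }
        double temperature() { return temperature; }
    }

A PID or fuzzy control law could be substituted as another Strategy without changing the loop itself.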
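The next sketch illustrates one documented idea, a Finite State Machine pattern, in its simplest enum-based form, where each state decides the successor state for an event; the turnstile example is hypothetical and merely stands in for the documented patterns.

    /** Hypothetical FSM pattern sketch: states are enum constants and each one
     *  selects the next state for an incoming event. */
    public class TurnstileFsm {

        enum State {
            LOCKED {
                State next(String event) { return "coin".equals(event) ? UNLOCKED : LOCKED; }
            },
            UNLOCKED {
                State next(String event) { return "push".equals(event) ? LOCKED : UNLOCKED; }
            };
            abstract State next(String event);
        }

        public static void main(String[] args) {
            State s = State.LOCKED;
            for (String event : new String[]{"coin", "push", "push"}) {
                s = s.next(event);
                System.out.println(event + " -> " + s);
            }
        }
    }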
Dynamic Load Balancing in Heterogeneous Network of
Multiprocessing and Multicomputer Systems (Cluster Computing with
Applications to Fluid Dynamics)
Funded by EPSCoR
Due to the tremendous decrease in the cost/performance ratio of
workstations and the increase in their capabilities, the idea of clustering was
born: gathering their computational power to build low-cost parallel machines.
A cluster, as defined by Pfister [1], is "a
parallel or distributed system that consists of a collection of interconnected
whole computers, that is utilized as a single unified computing resource."
Another reason for clustering is to increase the usage of otherwise idle resources
(an investigation at Los Alamos National Laboratory has shown that a workstation is only
utilized 10% of the time) and to offload other resources that may be heavily
utilized. The key to good clustering is a suite of scheduling algorithms that
both minimize the execution time of a given job and maximize the
utilization of the system resources by load balancing the cluster. The job
scheduler's main task is to accept jobs submitted by the cluster users and
allocate the necessary resources to these jobs. To achieve this goal, the
scheduler must keep track of all cluster resources, their capabilities, and
their utilization. It must also support both parallel and non-parallel
jobs, and it must be able to work on a heterogeneous cluster
composed of both small workstations and large parallel machines. Our
concern in this research is to develop new scheduling techniques for parallel
jobs in a network of heterogeneous systems composed of workstations and
parallel machines.
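To make the scheduling idea concrete, the sketch below shows the simplest heuristic of this kind, dispatching each submitted job to the node with the lowest relative load on a heterogeneous cluster; node names and capacities are assumptions, and the project's algorithms for parallel jobs are considerably richer than this.

    import java.util.*;

    /** Illustrative only: dispatch each job to the node with the most spare capacity. */
    public class LeastLoadedScheduler {

        static final class Node {
            final String name;
            final int cpus;          // capacity, heterogeneous across nodes
            int runningJobs = 0;
            Node(String name, int cpus) { this.name = name; this.cpus = cpus; }
            double load() { return (double) runningJobs / cpus; }
        }

        /** Pick the node with the lowest relative load and assign the job to it. */
        static Node dispatch(List<Node> cluster) {
            Node target = Collections.min(cluster, Comparator.comparingDouble(Node::load));
            target.runningJobs++;
            return target;
        }

        public static void main(String[] args) {
            List<Node> cluster = List.of(new Node("ws1", 2), new Node("ws2", 4), new Node("pm1", 16));
            for (int job = 1; job <= 5; job++) {
                System.out.println("job " + job + " -> " + dispatch(cluster).name);
            }
        }
    }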
CEMR Laboratory for Computer-Based Instructional Technologies (LCIT)
Co-Funded by the State of West Virginia and the College of Engineering and Mineral Resources
Risk Assessment and Performability Analysis of Software Systems Specifications
Funded by NASA Goddard
Parallel Algorithms for an Automated Fingerprint Identification System
Funded by NSF/EPSCoR
The Collaborative Medical Informatics Laboratory (CMIL).
NSF/EPSCoR Medical Imaging and Image Processing Research
Cluster (MIIPRS).
The proposed Medical Imaging and Image
Processing Research Cluster is intended to provide an infrastructure for
development of substantially stronger ties between research initiatives within
science and engineering (initially focusing on the research programs of the Dept. of Electrical and Computer Engineering at WVU and
the College of Engineering at WVUIT) and the health science research
initiatives (initially focusing on the R.C. Byrd Health Sciences Center
research programs).