Timothy Menzies

Professor

Engineering Building II (EB2) 3298

Area(s) of Expertise

Artificial Intelligence and Intelligent Agents
Data Sciences and Analytics
Health Care Information Technology
Information and Knowledge Management
Software Engineering and Programming Languages

Grants

Date: 01/01/23 - 12/31/24
Amount: $237,526.00
Funding Agencies: Laboratory for Analytic Sciences

We propose commissioning and assessing SUPERB for improving the efficiency of creating and maintaining ML models. Using SUPERB, we will address five problems. All our tools will be freely available under open source licenses for use by LAS personnel. Further, when convenient to LAS, we would run training sessions with SUPERB for LAS personnel. An important part of our methods is that SUPERB does not replace existing architectures, but augments them with small, but exceptionally useful, extensions. Hence SUPERB would be easily integrated into (e.g.) LAS's current workflows.

Date: 10/01/19 - 9/30/24
Amount: $592,129.00
Funding Agencies: National Science Foundation (NSF)

Standard methods in empirical software engineering (SE) need to be adapted before they can be safely deployed in other domains such as computational science. But which adaptation methods are useful, and which are useless? Are they cost-effective? Do they work effectively across multiple data sets? We have preliminary results suggesting that they work for (a) defect prediction, but can we also adapt other tasks such as (b) test case prioritization, (c) effort estimation, and (d) learning to avoid spurious false negatives from static code analysis? Why is this important? Building software is hard. Building good software is even harder when developers have not formally studied SE (as is the case for many developers of computational science software). How can we capture and maintain expertise about software development, then make that expertise more widely available?

Date: 10/01/19 - 9/30/24
Amount: $472,024.00
Funding Agencies: National Science Foundation (NSF)

Software analytics is a workflow that distills large amounts of low-value data into small chunks of very high-value data. A typical research paper in software analytics studies fewer than a few dozen projects. Such small samples can never be representative of something as diverse as software engineering. Perhaps it is time to stop making limited conclusions from tiny sets of software projects. To that end, we will apply innovative transfer learning methods (based on very fast clustering, and on very fast stream mining algorithms that use incremental hyperparameter optimizers) to the 10,000+ projects currently in GitHub.
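The transfer-learning idea here — cluster many projects, then reuse the model trained on the nearest cluster as a "donor" for a project we have never seen — can be sketched in a few lines. All names and data below are hypothetical illustrations, not the project's actual code:

```python
import math

def dist(a, b):
    """Euclidean distance between two project feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nearest_cluster(project, centroids):
    """Index of the closest cluster centroid: the donor for transfer learning."""
    return min(range(len(centroids)), key=lambda i: dist(project, centroids[i]))

# Hypothetical centroids of project clusters (e.g., size and churn metrics),
# each paired with a model trained only on that cluster's projects.
centroids = [(1.0, 1.0), (10.0, 10.0)]
models = {0: "defect model for small, stable projects",
          1: "defect model for large, high-churn projects"}

new_project = (9.0, 11.0)                      # an unseen project's metrics
donor = nearest_cluster(new_project, centroids)
print(models[donor])                           # reuse the donor cluster's model
```

At GitHub scale the clustering itself must be very fast (as the abstract notes), but the donor-selection step stays this simple.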

Date: 10/01/19 - 9/30/23
Amount: $499,998.00
Funding Agencies: National Science Foundation (NSF)

Software practitioners need methods to prioritize security verification efforts through the development of practical vulnerability prediction models. The PIs of this project have conducted extensive research of software analytics and vulnerability prediction algorithms. Based on that work, we can assert that vulnerability predictors usually use old data mining technology, some of which dates back several decades. This proposal will explore numerous better ways to build vulnerability predictors.

Date: 07/01/17 - 6/30/23
Amount: $898,349.00
Funding Agencies: National Science Foundation (NSF)

This research proposes to advance the state of the art to holistic scalable autotuners, which tune all levels of options for multiple optimization objectives at the same time. It will achieve this ambitious goal through the development of a set of novel techniques that efficiently handle the tremendous tuning space. These techniques take advantage of the synergies among all those options and goals by exploiting relevancy filtering (to quickly dispose of unhelpful options), locality of inference (which enables faster updates to outdated tunings), and redundancy reduction (which reduces the search space for better tunings). This new autotuner will be a faster method for finding better tunings that satisfy more goals. To test this claim, this research will assess whether this new tool can reduce the total computational resources required for effective SE data analytics by orders of magnitude.
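Tuning for multiple objectives "at the same time" usually means keeping the set of Pareto-non-dominated configurations rather than a single winner. A minimal sketch of that bookkeeping (illustrative only, not the proposed autotuner):

```python
def dominates(a, b):
    """a dominates b when a is no worse on every objective and strictly
    better on at least one (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(scores):
    """Keep only the configurations that no other configuration dominates."""
    return [s for s in scores if not any(dominates(t, s) for t in scores if t != s)]

# Hypothetical (runtime, memory) scores for four candidate tunings:
print(pareto_front([(1, 9), (9, 1), (5, 5), (6, 6)]))  # (6, 6) is dominated by (5, 5)
```

The novel techniques in the proposal (relevancy filtering, locality of inference, redundancy reduction) all exist to shrink the candidate set that a loop like this must score.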

Date: 01/26/21 - 7/25/21
Amount: $45,800.00
Funding Agencies: US Dept. of Defense (DOD)

Using a model-based systems engineering approach plus machine learning techniques, we will use simulation models to determine the minimum parameter space for test case generation for off-nominal behavior. In this way, we will learn the key parameters for requirements violations in complex black-box aerospace system models.
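One simple way to "learn the key parameters" of a black-box model is a flip test: resample one parameter at a time and count how often the requirement verdict changes. A toy sketch, where the hypothetical violates() stands in for a real simulation run:

```python
import random

def violates(params):
    """Hypothetical black-box check: does this configuration break a requirement?"""
    return params["thrust"] > 0.8 and params["mass"] < 0.3

def key_parameters(names, trials=2000, seed=1):
    """Rank parameters by how often resampling them flips the verdict."""
    rng = random.Random(seed)
    impact = {n: 0 for n in names}
    for _ in range(trials):
        p = {n: rng.random() for n in names}
        base = violates(p)
        for n in names:
            q = dict(p)
            q[n] = rng.random()           # perturb one parameter only
            if violates(q) != base:
                impact[n] += 1
    return sorted(names, key=impact.get, reverse=True)

# "color" ranks last: it never changes the verdict, so it can be
# dropped from the parameter space used for test case generation.
print(key_parameters(["thrust", "mass", "color"]))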

Date: 09/01/19 - 5/31/21
Amount: $985,485.00
Funding Agencies: National Science Foundation (NSF)

The overall goal of this Phase 1 Convergence Accelerator (C-Accel) proposal is to develop what we know to be the first public-facing AI platform that assists individual workers and small employers with upskilling and career changes in a labor market increasingly characterized by automation, technological disruption, and AI recruiting. It will address key challenges faced by employees and employers in occupations most impacted by AI with labor market research, credential gap diagnostics, and support for job search and retraining in AI recruiting. Focusing on manufacturing in Phase I, we will develop and build support for an occupation predicted to lose about 20% of its jobs to automation by 2026, namely machine operation, which employs mostly male non-college workers. Exploring retraining resources, job search strategies in AI recruiting, and reemployment opportunities in related occupations requiring complementary skills, we aim to assist manufacturing workers with upskilling and retraining while developing educational materials to help prepare younger generations for future jobs. Our innovative solution will be scaled up to a wide range of occupations and retraining programs in Phase II.

Date: 05/01/20 - 12/31/20
Amount: $47,082.00
Funding Agencies: LexisNexis

This work builds time series over industrial scale text mining data, looking for predictors for significant business events.

Date: 01/01/19 - 12/31/20
Amount: $157,709.00
Funding Agencies: Laboratory for Analytic Sciences

LAS DO1 Menzies - 2.4 Analytics, AI and Machine Learning

Date: 01/30/20 - 5/28/20
Amount: $72,675.00
Funding Agencies: Defense Advanced Research Projects Agency (DARPA)

Deep Learning has problems (CPU cost and the incomprehensibility of its models) that can be solved by sampling the rate of change in internal network weights as the deep learner streams over the data. Our LR2 algorithm detects novel inputs, then repairs existing models as appropriate. Using adaptive instance-based reasoning, LR2's model-based sequential optimizer continually improves local models across the decision boundary. These models can report anomalies and also generate explanations about why particular examples lead to one conclusion or another. LR2 should also significantly reduce training times for Deep Learning.
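The core signal described here — the rate of change in network weights between training steps — can be illustrated in a few lines. This is a toy sketch with a made-up threshold, not the LR2 implementation:

```python
import math

def weight_delta(prev, curr):
    """L2 norm of the change in flattened weights between two training steps."""
    return math.sqrt(sum((c - p) ** 2 for p, c in zip(prev, curr)))

def looks_novel(prev, curr, threshold=1.0):
    """A batch that moves the weights more than `threshold` is flagged as
    novel input, triggering model repair in the surrounding workflow."""
    return weight_delta(prev, curr) > threshold

print(weight_delta([0.0, 0.0], [3.0, 4.0]))   # 5.0
print(looks_novel([0.0, 0.0], [3.0, 4.0]))    # True: a large jump in weights
print(looks_novel([0.0, 0.0], [0.1, 0.1]))    # False: routine, familiar data
```

Because this statistic is cheap to compute per step, monitoring it adds little to training cost while exposing when the incoming data no longer matches what the model has learned.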


Awards
  • Mining Software Repositories Foundational Contribution Award - 2017
  • Carol Miller Graduate Lecturer Award - 2016
  • IBM Faculty Award - 2016, 2017