Keston Exploratory Research Grants
2023
ISI Exploratory Research Award
Efficient and Trustworthy Distributed LLM Inference System for Personalized Smart Home Assistants
John Paul Walters, Stephen Crago, Peter Beerel
The advent of LLM (Large Language Model) technology has made it feasible to build personalized intelligent assistants capable of assisting with various aspects of daily life, including but not limited to shopping, scheduling, and entertainment. However, as such assistants handle increasing amounts of sensitive personal and private information, their security and efficiency are critical. This project will develop a secure distributed LLM inference system composed of local edge devices. We will address security through targeted redundancy and cryptographic processing of a subset of the most sensitive weights. Efficiency will be addressed through model compression within a pipeline-parallel distributed environment.
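As a rough illustration of the architecture described above (a minimal sketch with hypothetical sizes and a naive norm-based sensitivity score, not the project's actual protocol), the following Python fragment partitions a toy transformer's blocks into pipeline stages and tags a small subset of weight tensors for protected, redundant handling:

# Minimal sketch (not the project's actual system): partition a toy transformer's
# blocks into pipeline stages across edge devices, and tag the most "sensitive"
# weight tensors (naively scored here by L2 norm) for redundant/encrypted handling.
import torch
import torch.nn as nn

blocks = nn.ModuleList([nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
                        for _ in range(8)])

num_devices = 4                                          # hypothetical edge devices
blocks_per_stage = len(blocks) // num_devices
stages = [blocks[i * blocks_per_stage:(i + 1) * blocks_per_stage]
          for i in range(num_devices)]                   # contiguous pipeline stages

# Score every weight tensor and select the top fraction as "sensitive".
scores = {name: p.detach().norm().item() for name, p in blocks.named_parameters()}
k = int(0.1 * len(scores))                               # protect the top 10% of tensors
sensitive = set(sorted(scores, key=scores.get, reverse=True)[:k])

def forward_pipeline(x):
    # Run activations through the stages sequentially (stand-in for real pipelining).
    for stage in stages:
        for block in stage:
            x = block(x)
    return x

out = forward_pipeline(torch.randn(1, 16, 64))
print(out.shape, f"{len(sensitive)} tensors flagged for protected execution")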
ISI Exploratory Research Award
Detecting Bias in the Law
Jonathan May, Jonathan Choi (USC Gould School of Law)
Biases in language models (LMs) can cause real-world financial and legal harms to the targets of those biases. For instance, LMs are used to make investment decisions based on company news and earnings calls; if an investment concerns, say, a particular ethnic group, a biased model may misinterpret ambiguous speech in an unflattering light, leading to a decision not to invest. LMs have been used to make first-pass hiring decisions and have specifically disfavored female applicants. We anticipate the use of LMs to accelerate legal decision-making, an especially important and high-risk domain. We believe the first step to combating bias is to identify it, ideally before actual harm is done. In this work we will develop techniques for systematically detecting bias in text, specifically in the legal subdomain.
Keston Research Award
Accelerating Fungal Genomic Insights Through Integration of LLMs with Advanced Sequencing Technology
Alexander Titus
The project focuses on enhancing biosecurity by quickly characterizing fungal species using state-of-the-art sequencing technology alongside Large Language Models like ChatGPT. This combination enables on-site, real-time genomic analysis, increasing the speed and accuracy of identification. The project involves integrating systems and developing APIs, training and testing AI with an extensive fungal genome database, and implementing and validating the system with diverse samples. Outcomes include a prototype system, novel genomic insights, and open-source tools, contributing significantly to mycological research and biosecurity.
Keston Research Award
ADMIN: AI-Generated Image Detection via Model Inversion
Mohamed Hussein, Amir Kalev
Generative models, particularly diffusion models, have revolutionized the field of text-to-image synthesis with state-of-the-art technologies, such as DALL-E and Midjourney. These innovative models excel at blending learned concepts to produce novel, realistic images at an unprecedented pace and ease, opening doors to myriad applications. However, this capability is a double-edged sword, as it also empowers malicious actors to create damaging content, spread misinformation, amplify societal biases, or violate copyrights. Addressing the urgent need to differentiate between original and AI-generated images, this project introduces a novel approach through model inversion. The proposed paradigm, applicable to any a-priori-unknown trained generative model, offers a robust solution without altering the image generation process. Our project provides a much-needed forensic tool in the era of generative AI, enhancing digital content's safety and integrity.
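To illustrate what model inversion can look like in this setting (a toy generator stands in for a real, a-priori-unknown diffusion or GAN model, and any decision threshold is omitted), the sketch below optimizes a latent code so that the generator reproduces a query image; an unusually low reconstruction error would suggest the image was produced by that generator:

# Illustrative sketch only (assumptions: a toy generator stands in for a real
# generative model). Model inversion here means optimizing a latent code so the
# generator reproduces the query image; a very low reconstruction error can
# indicate the image came from that (or a similar) generator.
import torch
import torch.nn as nn

generator = nn.Sequential(                      # stand-in for a trained generative model
    nn.Linear(32, 256), nn.ReLU(),
    nn.Linear(256, 3 * 16 * 16), nn.Tanh())

def inversion_error(image, steps=200, lr=0.05):
    z = torch.randn(1, 32, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        recon = generator(z).view(1, 3, 16, 16)
        loss = ((recon - image) ** 2).mean()
        loss.backward()
        opt.step()
    return loss.item()

query = torch.rand(1, 3, 16, 16) * 2 - 1        # hypothetical query image in [-1, 1]
print("reconstruction error:", inversion_error(query))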
2022
ISI Exploratory Research Award
Multi-document Newsworthy Event Monitoring and Forecasting
Muhao Chen and Jonathan May
Journalists discover stories by reading lots of boring reports: government records, court cases, proposed bills, etc., to find the needles of events that are of publication interest in the haystack of information. For example, the FinCEN scandal was uncovered by manual reading of tens of thousands of financial irregularity reports; eventually, extensive money laundering operations abetted by major US banks were discovered. While a compelling series of stories was written, it is unlikely the malefactors will ever receive justice, and regardless, the victims of terrorism, kleptocracy, and Ponzi schemes will not be made whole, because the information could not be uncovered in a timely manner. This illustrates two large problems journalists face: identifying newsworthy events amid an overload of information, and doing so in a timely manner so that corruption and crime can be uncovered and stopped before the damage is done. The goal of our work is to accelerate the detection of newsworthiness by identifying important events and event sequences within a sea of timely text, video, audio, and images, and to forecast likely future events in order to focus on events that will post hoc be newsworthy, even if they are not noticed as such at the time of occurrence. We also propose the creation of an event monitoring and forecasting system that will continuously monitor likely sources of news, such as city council meeting minutes.
ISI Exploratory Research Award
Coherent and Commonsensical AI for Social Influence
Filip Ilievski and Gale Lucas (USC ICT)
The goal of this project is to develop dialogue AI models that can deploy social influence using coherent natural language supported by commonsense reasoning methods, and to study the influence of the technology on end-users using subjective and objective metrics. Social influence, defined as the change in an individual's thoughts, feelings, attitudes, or behaviors that results from an interaction with another individual, is critical for many applications, from politics to bargaining and therapy. Current AI dialogue agents, negotiation chatbots, and recommendation assistants have been designed either to complete specific tasks or to have open conversations with users. However, these agents are inadequate for tasks involving negotiation, persuasion, and behavior change because of their inability to maintain a coherent and internally consistent dialogue with users. Building on our prior work on narratives, we will investigate how to build coherent AI that expresses satisfactory levels of linguistic and pragmatic sophistication. Instead of directly generating a response, the method will learn to first imagine a scene knowledge graph that coherently incorporates the planned dialogue acts and keywords, and then learn to verbalize the graph into a fluent dialogue response. AI that can function more effectively in tasks that require social influence will ultimately improve learning and negotiation skills, and increase productivity, health, and/or well-being. The importance of socially influential dialogue systems has been recognized by the Army Research Laboratory, which has matched 50% of the funds awarded by ISI for this proposal.
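A toy, non-learned sketch of the two-stage idea (hand-written rules here stand in for the learned graph generator and graph-to-text model in the actual project):

# Stage 1 "imagines" a small scene knowledge graph tying the planned dialogue act
# to its keywords; Stage 2 verbalizes that graph into a response. Learned models
# would replace both hand-written functions in practice.
def imagine_scene_graph(dialogue_act, keywords):
    return [("assistant", dialogue_act, kw) for kw in keywords]

def verbalize(graph):
    templates = {"propose": "How about we {obj}?", "justify": "I suggest this because {obj}."}
    return " ".join(templates[rel].format(obj=obj) for _, rel, obj in graph)

graph = imagine_scene_graph("propose", ["split the cost evenly"])
graph += imagine_scene_graph("justify", ["it keeps the negotiation fair"])
print(verbalize(graph))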
ISI Exploratory Research Award
Will Fiction Trump Fact? On the Inevitability of Identity Inconsistency in Deepfake Videos
Mahyar Khayatkhoei and Wael Abd-Almageed
The rapid advancement of generative deep learning – fueled by faster and cheaper compute power and exploding data availability – has blurred the line between fact and fiction. In particular, deepfakes – videos in which the motion of a source video is transferred to a target identity so that the target appears to say the words uttered by the source – are becoming increasingly hard to distinguish from real videos. Since existing deepfake detection methods can themselves be used directly as part of the objective in the deepfake generation process to improve the output, this rapid advancement in generative deep learning leads to a daunting question: will machines eventually be able to imitate any person without leaving any trace? Despite the alarming evidence, we conjecture that the answer is in fact negative, that is, deepfakes will always leave an inevitable trace. The goal of this research project is first to prove our conjecture empirically and theoretically, and second, to develop a system capable of detecting this particular trace. The proposed method relies on identifying inevitable internal inconsistencies within a video. In particular, our method will have the following advantages over existing methods for deepfake detection: 1) Generalization: our method will not require any deepfakes for training, making it future-proof and not specialized toward existing deepfake methods; 2) Scalability: it will not rely on learning a reference representation for each identity with which to compare input videos during inference; rather, it learns a frame-based representation that captures the features essential for comparing any two identities, and thus can scale to arbitrarily many new identities; 3) Efficiency: it will operate on only a pair of frames at a time, rather than on the whole video, thus requiring only a fraction of the GPU memory; 4) Explainability: it will provide a direct and human-verifiable explanation in the form of two frames with mismatched identities.
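A minimal sketch of the frame-pair comparison idea (a randomly initialized ResNet stands in for a trained identity-feature encoder, and the threshold is arbitrary):

# Embed two frames with a shared encoder and flag the pair when the identity
# embeddings disagree. This is a structural sketch only; the project's learned,
# identity-focused representation is replaced by an untrained ResNet.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

encoder = resnet18(weights=None)     # stand-in for a trained identity encoder
encoder.fc = torch.nn.Identity()     # use pooled features as the identity embedding
encoder.eval()

@torch.no_grad()
def identity_mismatch(frame_a, frame_b, threshold=0.5):
    emb = encoder(torch.stack([frame_a, frame_b]))           # (2, 512)
    sim = F.cosine_similarity(emb[0:1], emb[1:2]).item()
    return sim < threshold, sim

frame_a, frame_b = torch.rand(3, 224, 224), torch.rand(3, 224, 224)  # two video frames
print(identity_mismatch(frame_a, frame_b))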
Keston Research Award
A.I.-Driven Meal Recommendations To Meet Sociocultural and Nutritional Dietary Needs
Abigail Horn and Keith Burghardt
This work is using large-scale nutrition data and recommender systems to create algorithms that support clinicians in implementing meal prescription interventions inclusive of patients with diverse sociocultural backgrounds and food preferences. Poor diets are the leading cause of morbidity and mortality in the U.S., and meal prescriptions have shown promise as an effective clinical tool to improve diet and treat diet-related diseases. However, existing mobile applications ("apps") for meal planning are largely focused on Western normative diets, a major gap given that the populations most often targeted in meal prescription interventions are racial/ethnic minorities with different cultural dietary preferences. We are developing a recommendation algorithm, operationalized through a mobile app, that will recommend meals meeting the multi-criteria objectives of clinical requirements, dietary restrictions, cost limits, and sociocultural dietary pattern preferences. The algorithm and app will be customizable to and inclusive of socioculturally diverse audiences, highly scalable, and open-source. We will evaluate and validate the tool in a cohort of mostly Latino patients preparing for organ donation who need to meet specific health goals (e.g., BMI under a target) to prepare for the intensive surgery and recovery processes.
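A simplified sketch of the multi-criteria ranking idea (field names, weights, and meal records are hypothetical): hard clinical and dietary constraints filter the candidate meals, and a weighted score over cost and sociocultural preference match ranks the remainder:

# Filter candidate meals by hard constraints, then rank by a weighted score.
meals = [
    {"name": "pozole verde", "kcal": 520, "sodium_mg": 800, "cost": 4.5, "cuisine": "mexican"},
    {"name": "grilled salmon bowl", "kcal": 610, "sodium_mg": 450, "cost": 7.0, "cuisine": "american"},
    {"name": "lentil tacos", "kcal": 480, "sodium_mg": 600, "cost": 3.0, "cuisine": "mexican"},
]
patient = {"max_kcal": 600, "max_sodium_mg": 900, "budget": 6.0, "preferred_cuisines": {"mexican"}}

def feasible(meal, p):
    return meal["kcal"] <= p["max_kcal"] and meal["sodium_mg"] <= p["max_sodium_mg"]

def score(meal, p, w_cost=0.4, w_culture=0.6):
    cost_score = max(0.0, 1.0 - meal["cost"] / p["budget"])            # cheaper is better
    culture_score = 1.0 if meal["cuisine"] in p["preferred_cuisines"] else 0.0
    return w_cost * cost_score + w_culture * culture_score

ranked = sorted((m for m in meals if feasible(m, patient)),
                key=lambda m: score(m, patient), reverse=True)
print([m["name"] for m in ranked])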
Keston Research Award
360 Event Tracker
Steven Fincke
More than one in five Americans speak a language besides English at home, and many of these primarily consume media in their own language. Non-English-language American media outlets operate in relative isolation, with less oversight from competitors and public figures to impede the spread of misinformation and extremism. Conversely, important stories from overseas, as well as from local immigrant communities, are frequently not brought to the attention of the general American public. Drawing on our extensive experience identifying and extracting information about events in texts in dozens of languages, we address these problems with a new way to aggregate and track information about events of interest, e.g. elections, disease outbreaks, natural disasters, etc. Event Tracker 360 will combine state-of-the-art cross-lingual information retrieval, extraction, and sentiment detection technology to track real-world events across diverse sources and languages, gathering, and intelligently aggregating, information and perspectives about each event and the context in which it has occurred or is occurring. We will also develop and deploy a graphical user interface to make output accessible to lay users. One panel will provide overviews of top stories from various media outlets, integrating machine translation for non-English sources, and our event-comparison view will bring together stories pertaining to the same real-world event across multiple sources and languages, cluster similar reports, and display similarities and differences between selected pairs of sources.
Keston Research Award
Early Detection and Management of Wound Infection Using Array Measurements and AI
Mohammad Rostami
Wound care is a major global healthcare challenge, with an estimated 67 million people suffering from wounds, costing approximately $25 billion to manage. Infections in wounds are common but challenging to detect early, leading to ineffective treatments, antibiotic resistance, and increased mortality rates. Current detection methods require frequent patient assessments and microbial culture analysis, causing strain on healthcare resources. A non-invasive colorimetric pH sensor has been developed to measure volatile organic compounds (VOCs) emitted by microorganisms in wounds. This sensor, used as a dressing, changes color according to the wound's condition, and artificial intelligence (AI) analyzes the color changes to determine infection status. The sensor shows promise for early infection detection and differentiating between bacterial strains. To establish the sensor's effectiveness, an AI-based deep learning model is proposed to analyze the sensor's data. Patients can use their smartphones to capture images of the dressing sensor, and AI will predict the wound's condition and potential future infections. This user-friendly system aims to save healthcare resources and provide early warnings to patients. The AI model will be trained using data from in vitro lab experiments, and a transformer architecture will be employed for its temporal data processing capabilities. The research aims to create a prototype system, with the potential for further funding to conduct in vivo animal and human studies.
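A minimal, untrained sketch of the temporal model described above (hypothetical dimensions): a sequence of colorimetric readings extracted from successive dressing photos is fed to a small transformer encoder that outputs an infection probability:

# Sketch of a transformer over temporal colorimetric data; dimensions and the
# per-timestep features (mean RGB of the sensor patch) are illustrative assumptions.
import torch
import torch.nn as nn

class WoundInfectionModel(nn.Module):
    def __init__(self, n_features=3, d_model=32):
        super().__init__()
        self.embed = nn.Linear(n_features, d_model)            # per-timestep color reading
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, 1)

    def forward(self, x):                                      # x: (batch, time, 3)
        h = self.encoder(self.embed(x))
        return torch.sigmoid(self.head(h[:, -1]))              # probability of infection

readings = torch.rand(1, 12, 3)      # e.g., mean RGB of the dressing over 12 photos
print(WoundInfectionModel()(readings))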
2021
ISI Exploratory Research Award
Electronically Programming Thermal Emissions and Signatures from Warm Bodies
Jonathan Habif
At the ISI Laboratory for Quantum-Limited Information (QLIlab), we have developed a revolutionary electro-thermo-optic device able to electronically program and switch the rate at which (traditionally static) Planck radiation is emitted by warm bodies – allowing control of heat expulsion from the surface of an object AND control of the thermal signature of an object as it appears on a thermal camera. This early-stage technology could be applicable to a multitude of application areas – including a new, unexplored communications signaling medium or a technique for engineering heat flow across the surface of an object. We have built a prototype unit that can be shared with engineers and entrepreneurs so that they can demonstrate new technologies that capitalize on this exotic new capability. In the hands of inventors and entrepreneurs, this novel effect can lead to radical new ideas and technologies spanning the broad fields of communications, emergency preparedness, and heat management (for humans and other heat-generating bodies). As an exemplar demonstration, we used the radiators to demonstrate a novel optical communications technique achieving 100 bps data rates over laboratory-scale distances.
ISI Exploratory Research Award
Standing on the Shoulders of Giants: Understanding Creativity and Collaboration through Temporal Knowledge Graph Learning
Jay Pujara
How are great ideas born? Do they spring from individual geniuses, or large teams, or multidisciplinary collaborations? Many models have been proposed and debated in studies of innovation. However, analyses have been limited in the size and scope of the data they consider. We study models of collaboration at a vast scale of hundreds of millions of research publications, spanning both decades and disciplines to identify patterns of innovation. Formulating publication data as a temporal knowledge graph of authors, publications, institutions, and journals, we identify scientific communities and how they evolve over time and build forecasting models for new research ideas. Our project has created resources that allow exploration of the lineage of ideas, the enabling collaborative structures, and how individuals evolve over their careers.
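A toy sketch of the temporal knowledge graph formulation (made-up records): nodes represent authors, papers, and venues, and edges carry the publication year so that collaborations and communities can be analyzed over time:

# Build a small timestamped graph of authors, papers, and venues with networkx.
import networkx as nx

records = [
    {"paper": "P1", "year": 1998, "authors": ["A. Rivera", "B. Chen"], "venue": "NeurIPS"},
    {"paper": "P2", "year": 2005, "authors": ["B. Chen", "C. Okafor"], "venue": "ACL"},
]

G = nx.MultiDiGraph()
for r in records:
    G.add_node(r["paper"], type="paper", year=r["year"])
    G.add_edge(r["paper"], r["venue"], relation="published_in", year=r["year"])
    for a in r["authors"]:
        G.add_edge(a, r["paper"], relation="authored", year=r["year"])

# Example query: which papers did B. Chen author, and when?
print([(p, G.nodes[p]["year"]) for p in G.successors("B. Chen")])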
Keston Research Award
IRIS: Integrated Retinal Functionality in Image Sensors
Akhilesh Jaiswal, Ajey Jacob, Gregory Schwartz (Northwestern University) and Maryam Parsa (George Mason University)
Today's computer vision relies exclusively on light-intensity-based pixel data collected through state-of-the-art CMOS image sensors. However, in almost all cases, appropriate context for the pixels is missing (or is extremely vague) with respect to the ‘real-world events’ being captured by the sensor. Thus, the onus is put on intelligent machine learning algorithms to pre-process, extract appropriate context, and make intelligent decisions based on light-intensity-based pixel data. Unfortunately, such a vision pipeline leads to 1) complex machine learning algorithms designed to cater to image/video data without appropriate context, 2) increased time to decision, associated with the processing time of the machine learning algorithms, and 3) energy-hungry and slow access to pixel data captured and generated by the CMOS image sensor. In contrast, the biological retina provides dozens of feature-selective computations, such as object motion (as opposed to self-motion), the estimated trajectory of objects, object velocity, shape (elongated versus round), orientation, and the presence of an approaching threat, to the downstream visual cortex. The visual cortex thus receives precise feature data from the retina, which animals heavily leverage for low-latency processing and maneuverability when surviving and escaping predators.
Our proposal aims to develop a new family of retina-inspired sensors called IRIS (Integrated Retinal functionality in CMOS Image Sensors) that would usher in new frontiers in vision-based decision making by generating highly specific motion- and shape-based features, providing valuable context for the pixels captured by the camera. Our proposal is based on the understanding of biological retinal computation, which has made enormous strides over the past decade; recent studies have challenged the conventional wisdom of the eye being a passive filter and have instead portrayed the retina as a highly optimized pre-processing entity that generates specific, targeted vision-related data for the brain. However, much of this recent growth in the understanding of retinal computation has not yet been implemented in image sensor technologies, even though there is a long-standing symbiosis between vision science and technologies like cameras, displays, and retinal implants. Thus, our proposal for a new family of IRIS sensors is a logical and technological successor to existing pixel-based sensors as well as to the emerging field of dynamic vision sensors.
Keston Research Award
Unbiased and Explainable Diagnosis of Melanoma
Michael Pazzani and Mohammad Rostami
Malignant melanoma is the fifth most common form of cancer diagnosed in the US. Early detection and treatment of melanoma are essential to reduce the mortality rate of this cancer. Additionally, despite being more prevalent in Caucasians, it is diagnosed at later stages and has a higher mortality rate in people of color. To increase the possibility of successful treatment, it is important to distinguish cancerous moles from other skin lesions at the early stages of melanoma. We use recent advances in AI to offer a solution that mitigates both challenges. We propose automatic differentiation of melanoma and other skin cancers from benign moles at early stages using images taken by ordinary smartphones, to signal people at risk to visit a clinician early. We train a deep neural network for this purpose. To mitigate bias, we generate synthetic images that represent minority sub-populations in a more balanced way than existing datasets. We also provide user-centered explanations for melanoma image analysis to enable clinicians to interpret the results. To this end, we annotate the input image with the regions that contribute to the network's decision and offer an explanation of why each region is important for diagnosis.
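One simple way to produce the kind of region-level annotation described above is occlusion-based attribution; the sketch below (with a toy CNN standing in for the trained diagnostic network) masks each image patch in turn and records how much the predicted melanoma probability drops:

# Occlusion-based region attribution: large drops in the prediction when a patch
# is masked mark that region as influential for the diagnosis.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                      nn.Linear(8, 1), nn.Sigmoid())          # stand-in for the trained network
model.eval()

@torch.no_grad()
def occlusion_map(image, patch=16):
    base = model(image.unsqueeze(0)).item()
    _, H, W = image.shape
    heat = torch.zeros(H // patch, W // patch)
    for i in range(0, H, patch):
        for j in range(0, W, patch):
            occluded = image.clone()
            occluded[:, i:i + patch, j:j + patch] = 0         # mask one region
            heat[i // patch, j // patch] = base - model(occluded.unsqueeze(0)).item()
    return heat                                               # higher = more influential

lesion_image = torch.rand(3, 64, 64)                          # hypothetical smartphone crop
print(occlusion_map(lesion_image))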
ISI Exploratory Research Award
Why Resilience In Innovation Is Necessary and How to Foster It
Kristina Lerman, Jay Pujara, and Keith Burghardt
Resilience helps people and organizations recover from shocks. The COVID-19 pandemic offers a unique "natural experiment" for studying resilience, because the pandemic's impacts were highly uneven, with policies varying by country (e.g., China's "zero COVID" policy vs Sweden's more hands-off approach), state (e.g., strong vs weak COVID response), community (e.g., political climate and school closures), and even institution (e.g., remote work policy, vaccine mandates). We will study the heterogeneous impact of the pandemic on scientific innovation by analyzing newly available bibliographic data sets covering millions of authors and papers. We will apply methods from the field of causality to understand how the disruptions affected research collaborations, and what factors promote resilient research.
2020
Keston Research Award
Effective Interventions of Misinformation in Online Social Networks
DANIEL BENJAMIN AND FRED MORSTATTER
Misinformation has led to many recent harms, including mistrust of institutions, disregard for public health guidelines, decreased vaccination rates, a divided public, and civil unrest, among others. The current fractured media landscape allows individuals to choose confirming over credible information. Misinformation can be countered by identifying gaps in mental representations of the world (mental models) and prompting alerts to be vigilant about assessing information (Lewandowsky et al, 2012). We strive to develop interventions that mitigate the spread of misinformation by visualizing the social networks surrounding hot-button issues.
Our interventions describe how personal media consumption reaches only a limited portion of the media landscape. Social sampling theory holds that our misperceptions of others are explained by the sample of people we encounter (Galesic, Olsson, and Rieskamp, 2012), and we are more likely to link to similar people online (Kossinets & Watts, 2009). People are unaware of their own biases even when they can see them in others (Pronin, Lin, & Ross, 2002). Our intervention addresses this gap by making these biases explicit. We will pair social network analysis with traditional behavioral experiments. Our online experiments will test how individuals perceive their own networks and respond to various network visualizations.
Keston Research Award
SARFire: Rapid Wildfire Detection through Synthetic Aperture Radar
ANDREW RITTENBACH AND JP WALTERS
Over the last five years, the costs due to unchecked wildfires have dramatically increased. In California alone, millions of acres have burned and the cost of damages has increased by an order of magnitude, up to billions of dollars per year. Without radically improved detection methods, these costs are expected to continue increasing. Ground-based detection solutions using cameras or other types of sensors have proven only minimally effective, in part because of their limited field of view. An alternative detection approach is remote sensing, where measurements are taken by a satellite. However, today's satellite-based approaches have several limitations: 1) their imaging modalities are limited to kilometer-scale resolution and are vulnerable to near-total sensor blackout due to wildfire smoke, 2) all data must be downlinked to Earth for image processing, which adds hours between data collection and fire detection, and 3) the revisit time of today's satellites is on the order of days, which limits the ability of satellites to perform early detection and warning. The SARFire project seeks to address these limitations using a novel deep learning-based onboard Synthetic Aperture Radar (SAR) imaging technique and satellite constellations to provide near-constant overhead fire surveillance. To achieve this goal, we will develop a deep learning-based model that performs both SAR image formation and wildfire detection. Furthermore, to demonstrate that our approach is suitable for onboard SAR processing, we will port it to an embedded platform representative of the compute resources that will be available on state-of-the-art SAR imaging satellites. We believe that when real-time SAR imagery is used in conjunction with data collected from other remote sensing satellites, we will be able to rapidly detect, localize, and monitor wildfires with meter-scale resolution, improving the imaging resolution available for early wildfire detection by nearly 1000 times, while also substantially reducing detection time, greatly increasing the chance of early wildfire detection and thus limiting the damage wildfires cause.
ISI Research Award
FairPRS: Fairly Predicting Genetic Risks for Personalized Medicine
JOSE-LUIS AMBITE, GREG VER STEEG, KEITH BURGHARDT, KRISTINA LERMAN AND CHRIS GIGNOUX (UNIVERSITY OF COLORADO)
Personalized medicine seeks to improve disease prevention, diagnosis, and treatment by tailoring medical care to the individual. Uncovering the genetic basis of diseases and traits promises a better understanding of biological mechanisms and the design of drugs and interventions. A Polygenic Risk Score (PRS) combines the effects of many genetic variants into a score that indicates the risk of a disease for a given individual. However, genetic effects vary with ancestry. A PRS developed for one population often has low performance in another. Our goal is to develop novel methods for predicting genetic risks that generalize across populations, and thus can be broadly and fairly applied for personalized medicine.
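For readers unfamiliar with the construction, a polygenic risk score is essentially a weighted sum of an individual's risk-allele counts, with per-variant effect sizes estimated from an association study; ancestry-dependent effect sizes are precisely what makes naive PRS transfer unfair. A minimal sketch with toy numbers:

# PRS = sum over variants of (effect size) x (risk-allele count); all numbers are toy values.
import numpy as np

effect_sizes = np.array([0.12, -0.05, 0.30, 0.08])    # per-allele effects from a toy GWAS
genotype = np.array([2, 1, 0, 1])                      # risk-allele counts (0, 1, or 2)

prs = float(effect_sizes @ genotype)
print(f"PRS = {prs:.2f}")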
ISI Research Award
Identifying Populations Susceptible to Anti-Science
KEITH BURGHARDT AND GORAN MURIC
Anti-science attitudes, including anti-vaccine attitudes, are present within a large and recently active minority. Vaccine hesitancy is partly responsible for a significant resurgence of measles and is a large reason why COVID-19 remains an epidemic, especially within the United States. The rapid spread of conspiracy theories and polarization online is one reason behind these attitudes. In this proposal, we aim to understand who these anti-vaccine users on Twitter are, what language they use, and whom they are likely to influence.
We are building a model that can identify anti-vaccine sentiment and provide an anti-vaccine score for each queried account. This score corresponds to the likelihood that a user will express anti-vaccine attitudes in the future if they have not expressed these attitudes before. We use tweets and explore various features, such as who users interact with, to determine user vulnerability. The model will be published in a public code repository for use by researchers and policy makers. Our work will enable a rapid response to the recent uptick in anti-science sentiment by identifying users vulnerable to such messages. This tool, combined with properly targeted messaging and campaigns, has the potential to significantly enhance pro-vaccine efforts in the future.
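A toy sketch of producing such a per-account score (tiny made-up data, text features only; the actual model also uses interaction features): a classifier is fit on labeled tweets, and its predicted probabilities over an account's recent tweets are averaged into one score:

# Fit a simple text classifier and average its probabilities into an account-level score.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_tweets = ["vaccines cause more harm than good", "big pharma is hiding the truth",
                "got my flu shot today, quick and easy", "grateful for the vaccine rollout"]
labels = [1, 1, 0, 0]                                   # 1 = anti-vaccine, 0 = not

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(train_tweets, labels)

account_tweets = ["they never tell you what is really in those shots"]
score = clf.predict_proba(account_tweets)[:, 1].mean()  # likelihood of anti-vaccine posting
print(f"anti-vaccine score: {score:.2f}")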
ISI Research Award
AI2AI: Discovering and Assessing Vulnerability to AI-Generated Twin Identities
MOHAMED HUSSEIN AND WAEL ABD-ALMAGEED
Modern artificial intelligence methods can create pictures that match the quality of natural images, including synthesizing photo-realistic face images of people who are believed not to exist in real life. However, what if some of these AI-generated faces are virtually "identical twins" of real individuals? Such fake twins can, intentionally or unintentionally, cause harm. They can be used in ways their real counterparts never consented to. Meanwhile, although modern AI models have been able to match or exceed human performance in multiple tasks, a face recognition model can be fooled into identifying an image of person A as person B if specific maliciously crafted, imperceptible perturbations are applied to an image of person A. This phenomenon underscores another type of AI-generated twin, adversarial twins, which are easier to generate than fake twins and are harmful by construction. A lot of existing research is dedicated to generating more natural-looking fake or adversarial twins. However, to our knowledge, there has been little or no prior focus on discovering and assessing the vulnerability of individuals to the threats they pose. AI Investigating AI (AI2AI)'s objective is to shed light on and ignite research efforts to close this gap. AI2AI's goal is to help communities and law enforcement agencies discover and assess the vulnerability of individuals to fake and adversarial twins.
ISI Research Award
Bio-PICS: Bio-optical Point of care Intelligent COVID-19 Sensor
AJEY JACOB, AKHILESH JAISWAL AND NEHA NANDA (KECK SCHOOL OF MEDICINE)
COVID-19 has changed human lifestyles over the last two years. It has already claimed millions of lives and trillions of dollars in losses [United Nations report 2020] worldwide. Yet, despite unprecedented vaccination efforts, variants of the virus are still spreading. Therefore, early detection is the best prevention approach to reduce the spread of the virus. Currently available detection methods are time-consuming, expensive, and require expert intervention. Thus, there is an unmet and urgent need, motivated by health and economic concerns, to develop rapid, cheap, easy-to-use, point-of-care (POC) COVID-19 testing.
To this end, we are developing a novel selectively-sensing bio-photonic microfluidic optical ring resonator-based integrated chip architecture with an on-chip spectrometer consisting of coupled ring resonator filters and integrated photodetector arrays. This intensity-based sensing scheme will provide a spectral accuracy of less than five picometers, which is better than reported state-of-the-art intensity detection schemes. Thus, the integrated device facilitates quantitative detection of ultralow virus loads (picograms per milliliter) without expensive external spectral measurements. In addition, the integrated chip architecture design also reduces fabrication-induced performance variation and thermal sensitivity. Moreover, CMOS compatibility of the components used in the sensing circuit facilitates high-volume manufacturing and lower cost, which promises commercial success.
ISI Research Award
3D Facial Muscle Screening Tool For Early Diagnosis Of Parkinson Disease
HENGAMEH MIRZAALIAN AND WAEL ABD-ALMAGEED
Parkinson disease (PD) is one of the most common neurodegenerative movement disorders. It has been reported that approximately 1.2 million people in the United States will be affected by PD by the year 2030. PD causes slight shaking or tremor in the fingers, slow handwriting, trouble walking, and loss of facial expressiveness. Early diagnosis of PD can be challenging in the presence of subtle movement alterations, yet accurate and early diagnosis is crucial so that patients can receive proper treatment and advice. It has been shown that early PD detection might delay or even stop the spread of the neurodegenerative process to other central nervous system regions. To the best of our knowledge, existing diagnostic tools for screening the facial expressions of PD patients rely on a limited number of 2D facial landmarks (up to 64). Since more information can be derived from 3D images, our goal in the proposed effort is to develop a facial expression screening tool over a fine-grained 3D mesh of the face. We compute a 3D mesh per frame of the captured video. Then, the series of reconstructed 3D meshes is analyzed and quantified to study and evaluate spasticity and rigidity of facial muscles in PD patients.
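A simplified sketch of the quantification step (random stand-in data; the real pipeline reconstructs the meshes from video): given one 3D face mesh per frame, expressiveness can be summarized as per-vertex motion over time, with unusually low motion suggesting rigidity:

# Summarize facial motion as per-vertex frame-to-frame displacement over a mesh sequence.
import numpy as np

n_frames, n_vertices = 120, 5000
meshes = np.random.rand(n_frames, n_vertices, 3)        # (frame, vertex, xyz), hypothetical

displacement = np.linalg.norm(np.diff(meshes, axis=0), axis=2)   # (frames-1, vertices)
per_vertex_motion = displacement.mean(axis=0)                     # average motion per vertex

print("mean facial motion:", per_vertex_motion.mean())
print("least mobile 1% of vertices:", np.quantile(per_vertex_motion, 0.01))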
ISI Research Award
Learning Fair AI Models Across Distinct Domains
MOHAMMAD ROSTAMI AND ARAM GALSTYAN
As societies become increasingly reliant on AI for automatic decision-making across a wide range of applications, concerns about bias and fairness in AI are growing in parallel. Fairness in AI is not merely an ethical issue: bias also undermines efficiency and productivity in the labor market. It has been demonstrated that “at least a quarter of the growth in the U.S. GDP between 1960 and 2010 is the result of greater gender and racial balance in the workplace”. A common approach for studying fairness is to investigate whether model decisions are related to sensitive attributes, such as gender or race. Since this is a relatively new research area, current work focuses mainly on debiasing AI models for a single domain. However, an initially fair model trained for one domain may be used in many other domains during execution. This means that even if we can train a fair model in a source domain, there is no guarantee that it will generalize fairly to target domains, or when drifts in the input distribution occur during testing. We are addressing this challenge within a domain adaptation formulation. Our goal is to adapt a pretrained fair model so that it generalizes well, and fairly, in a target domain using only unlabeled target-domain data. Instead of starting training again with a new unbiased dataset, we aim to use the knowledge gained during the original debiasing to preserve the fairness of the model in the new domain using unannotated target-domain data. We will test the effectiveness of the algorithm we are developing on real-world benchmark datasets.
2019
Keston Research Award
Fighting Misinformation: An Internet System for Detecting Fake Face Videos
WAEL ABDALMAGEED AND IACOPO MASI
The current spike in hyper-realistic faces artificially generated using deepfakes calls for media forensics solutions that are tailored to video streams and work reliably with a low false alarm rate at the video level. We present a web service offering a new way to assess whether a face video has been manipulated. The system employs an AI-based engine for deepfake detection following our current research direction on video-based face manipulation detection [A, B]. This research direction paves the way for achieving scalable, person-agnostic deepfake detection in the wild.
The Deepfake Detection Web Service allows the user to upload a short video. The video is processed in the background by our deepfake detection engine. The user is then notified and can review the detection output superimposed over the original video. The Deepfake Detection Web Service keeps a history of previously processed videos so that they can be easily inspected if need be. It also offers a user management system allowing each user to privately inspect her/his own videos.
Technical video presentation: https://www.youtube.com/watch?v=X3N8QjV15d8&feature=youtu.be
Quick Demo: https://www.youtube.com/watch?v=RspKj9DtM9U
Related papers:
[A] Ekraam Sabir, Jiaxin Cheng, Ayush Jaiswal, Wael AbdAlmageed, Iacopo Masi, Prem Natarajan, "Recurrent Convolutional Strategies for Face Manipulation Detection in Videos", CVPR 2019 Workshop on Media Forensics
[B] Iacopo Masi, Aditya Killekar, Royston Marian Mascarenhas, Shenoy Pratik Gurudatt, Wael AbdAlmageed, "Two-branch Recurrent Network for Isolating Deepfakes in Videos", ECCV 2020
ISI Research Award
Automating Programmability of Hybrid Digital-Analog Hardware for Stochastic Cell Simulation in Biological Systems
ANDREW RITTENBACH, PRIYATAM CHILIKI, DEV SHENOY
Biological system modeling has become increasingly important over the past decade as it enables biologists both to quantitatively explain observations made in a laboratory environment and to make predictions about how biological processes respond to given inputs. An example model consists of a set of biochemical reactions that feed into and interact with each other over time. One of the ‘holy grails’ of biological system modeling is a fully functional model of the entire human cell. Such a model could potentially enable the development of personalized medicine for various treatments, using an individual’s DNA as input to the cell model. However, one of the current bottlenecks in achieving this goal is the development of a platform capable of modeling all of the individual processes that go on within a cell simultaneously. Today, biological systems are modeled in software. Although this is acceptable for small-scale models of individual biological pathways, it is not viable for large-scale gene-protein networks. To this end, the ISI team investigated the viability of an alternative approach: a Cytomorphic computing platform pioneered by Prof. Rahul Sarpeshkar, which consists of programmable hybrid analog/digital circuitry designed specifically to model biochemical reactions. One challenge introduced with this approach, however, is determining how to configure the analog circuit parameters, such as current source amperage, to accurately model the reactions. In this project, the ISI team developed a reinforcement learning-based approach to automatically configure a Simulink-based model of a Cytomorphic circuit. Results showed that, after configuration, simulation data for a biochemical reaction produced by the Cytomorphic circuit-based model closely matched simulation data generated by COPASI, a standard software package used to simulate biochemical processes.
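A greatly simplified stand-in for the configuration step (random search instead of the team's reinforcement-learning approach, and an analytic toy model instead of the Simulink Cytomorphic circuit): search for circuit parameters whose simulated trace best matches a reference trace standing in for COPASI output:

# Toy parameter search: minimize the mismatch between a surrogate circuit model and
# a reference reaction trace. All models and parameter ranges here are illustrative.
import numpy as np

t = np.linspace(0, 10, 200)
reference = 1.0 - np.exp(-0.7 * t)                 # stand-in for a COPASI reaction trace

def circuit_model(rate, gain):
    return gain * (1.0 - np.exp(-rate * t))        # toy surrogate for the analog circuit

rng = np.random.default_rng(0)
best_params, best_err = None, np.inf
for _ in range(2000):
    rate, gain = rng.uniform(0.1, 2.0), rng.uniform(0.5, 1.5)
    err = np.mean((circuit_model(rate, gain) - reference) ** 2)
    if err < best_err:
        best_params, best_err = (rate, gain), err

print("best (rate, gain):", best_params, "mse:", best_err)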
ISI Research Award
Translators for Asylum Seekers at the Border
JONATHAN MAY
We are building domain-focused universal language translator tools to enable asylum applicants on the southern border of the US to communicate with immigration lawyers in order to prepare for credible fear interviews that can mean the difference between life and death. Preparation with a lawyer increases the chances of a successful asylum application from 13% to 74%. Unfortunately, many applicants speak languages such as Luganda, Mixtec, Mam, or Kanjobal, that are not available on commercial services like Google Translate, due to the extremely low data resources available for training models. Additionally, commercial translation models are not well-suited for translating credible fear narratives. We will use our expertise in low-resource translation to boost data resources with backtranslation, novel sentence generation, and related language transfer, and will leverage the USC Shoah Foundation's collection of genocide survivor testimony to adapt our models to handle this chilling domain.
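A schematic sketch of backtranslation, one of the augmentation strategies mentioned above (the reverse model below is a trivial placeholder for a trained English-to-low-resource-language translator, and all sentences are invented): monolingual English text from the target domain is translated back into the low-resource language to create synthetic parallel pairs that augment the small real training set:

# Backtranslation data flow: pair synthetic source sentences with real English targets.
def reverse_translate(english_sentence):
    # Placeholder for a trained English -> low-resource-language model.
    return "<synthetic source for: " + english_sentence + ">"

real_parallel = [("<low-resource sentence>", "to grant the asylum request")]  # tiny real seed
monolingual_english = ["the applicant described a credible fear of persecution",
                       "the hearing was rescheduled by the immigration judge"]

synthetic_parallel = [(reverse_translate(e), e) for e in monolingual_english]
training_data = real_parallel + synthetic_parallel
print(f"{len(training_data)} sentence pairs ({len(synthetic_parallel)} synthetic)")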
ISI Research Award
Advancing No-Resource Languages
JOEL MATHEW AND ULF HERMJAKOB
Among the world's languages, about 100 are rich in written resources (e.g. English and Hindi), with large corpora of digitized text, translations from and to other languages, and dictionaries. Some 1000 additional languages are considered low-resource (e.g. Uyghur and Odia), whereas the remaining 6000 languages (e.g., Gaddi and Reli) have no or hardly any written resources. In this project, ISI colleagues Dr. Ulf Hermjakob and Joel Mathew will build a library of computational linguistic tools to support building dictionaries, producing translations, and assembling a substantial initial text corpus for no-resource languages. Such resources are critical for developing literacy, translating existing texts such as the Bible, encouraging the creation of original content, as well as language documentation and preservation. From a computer science perspective, the Bible, with its currently 698 full translations, is a massively parallel corpus that will greatly facilitate useful new tools for no-resource languages, such as automatically (1) identifying likely spelling variations, (2) identifying multi-word expressions, (3) identifying ambiguous words and clustering instances of these words in the source language, (4) identifying names to be transliterated, and (5) morphological processing of inflectionally related words, with automatic translation of such related words.
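As a small illustration of tool (1), identifying likely spelling variations (toy word list and an arbitrary similarity threshold): word forms from the corpus that are close in edit similarity are proposed as variants of one another:

# Propose spelling-variant pairs among corpus word forms using string similarity.
from difflib import SequenceMatcher

tokens = ["Yerusaalemi", "Yerusalemi", "Galilaaya", "Galilaya", "mukama"]  # toy word list

def similar(a, b, threshold=0.85):
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

variant_pairs = [(a, b) for i, a in enumerate(tokens) for b in tokens[i + 1:] if similar(a, b)]
print(variant_pairs)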
2018
An Artificial Intelligence-Based Mobile Screening Tool: Fetal Programming in Congenital Adrenal Hyperplasia
WAEL ABD-ALMAGEED AND MIMI KIM
Artificial intelligence methods will be used to investigate facial morphology in children with fetal programming due to prenatal hormone exposure. Studies will target children with congenital adrenal hyperplasia (CAH) as a natural human model for excess prenatal testosterone. Classical CAH is caused by a 21-hydroxylase deficiency, affecting 1 in 15,000, with fetal hyperandrogenism due to overproduction of adrenal androgens from week 7 of fetal life. This prenatal hormone exposure represents a significant change to the intrauterine environment during early human development that can adversely program the CAH fetus for postnatal disease. A prototype mobile imaging platform will be designed and built that enables the collection of large-scale facial images in children’s clinics without relying on expensive 3D imaging systems. Further, an artificial intelligence-based 2D-to-3D facial processing pipeline will acquire images of the face in healthy controls and CAH youth and compare facial dysmorphism scoring between CAH patients and those who are unaffected.
Satbotics Control: How to Merge Biologically Inspired Spacecraft Together
DAVID BARNHART
This project will provide support to multiple graduate students to develop a new computational architecture that can enable independent satellites or spacecraft to physically and virtually “aggregate” on orbit. This is a completely new methodology that shifts how future space systems are created from monolithic to cellular. The computational architecture is intended to allow seamless merging of sensors/actuators/payloads as “resources” that can then be shared autonomously with all other “cells” to enable greater overall performance and capability on orbit than a single large platform can provide. The basics of this new architecture will be demonstrated on an internal 3-DOF air-bearing testbed, using independent floatbots that simulate independent spacecraft.
A Betavoltaic-Powered Transmitter for Continuous Glucose Monitors
ALEXEI COLIN
The Glutex project aims to develop a long-lived, low-maintenance continuous glucose monitor (CGM) for diabetes patients. A CGM is a wearable device that reports blood glucose levels to the patient. Existing CGMs require the patient to recharge batteries every few days and replace the device semiannually. Glutex eliminates this maintenance by replacing the battery with a betavoltaic energy harvester that lasts up to a decade. Glutex pioneers a circuit that accumulates the small trickle of energy from the harvester and releases it in bursts to power the sensor. A successful prototype opens the path to applying betavoltaic power sources in wearable and implantable medical devices.
FLEX SYNapses for Smart Wearable Electronics and Skin-Attachable Biosensing Devices
IVAN SANCHEZ ESQUEDA
Synaptic transistors on flexible and stretchable substrates can enable the implementation of artificial neural networks and learning algorithms when attached to skin sensors for in situ processing and classification of biological signals collected from wearable devices. They can also enable us to mimic the functions of sensory nerves and construct bioelectronic reflex arcs to actuate electro-mechanical devices. This technology has applications for electrophysiology and medical diagnosis, fitness and activity tracking devices, prosthetics, robotics, etc.
Discovery and Dismantling of Human Trafficking Networks
MAYANK KEJRIWAL AND PEDRO SZEKELY
Human trafficking is a form of modern-day slavery with a significant footprint—even here in the United States. Computational tools and methods, including network analysis and machine learning, can help in the data-driven mapping of networks of illicit sex providers, many of whom may be victims of trafficking, based on illicit advertisements posted on the Internet. Researchers are currently working to discover and dismantle such networks, especially where underage victims may be involved. This effort involves collaboration with law enforcement as well as independent consultations with domain experts in the social sciences.
2017
Understanding Internet Outages
JOHN HEIDEMANN
In past years our research has led to sophisticated tools for detecting Internet outages: transient failures of the Internet caused by natural or man-made events. Our goal for this Keston effort was to present information about these outages in a natural and approachable way, meaningful to first responders and the general public.
The result of our work was the creation of a new website at https://outage.ant.isi.edu/ that supports viewing the Internet outage data that we collect. Our website makes exploring outage data more accessible to researchers and the public by interpreting terabytes of collected Internet outage data, and making the interpreted information visible on a world map.
Our website supports browsing more than two years of outage data, organized by geography and time. The map is a Google Maps-style world map, with circles drawn at even intervals (every 0.5 to 2 degrees of latitude and longitude, depending on the zoom level). Circle sizes show how many /24 network blocks are out in that location; circle colors show the percentage of outages, from blue (only a few percent) to red (approaching 100%). The raw data underlying this website is available on request at https://ant.isi.edu/datasets/outage/index.html.
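A small sketch of the map encoding described above (the scaling and color ramp below are illustrative, not the site's exact parameters):

# Map a grid cell's outage counts to a circle radius and a blue-to-red color.
def circle_style(blocks_out, blocks_total):
    radius_px = max(2, min(30, blocks_out ** 0.5))            # grows with count, capped
    frac = blocks_out / blocks_total if blocks_total else 0.0
    red, blue = int(255 * frac), int(255 * (1 - frac))        # blue (few %) -> red (~100%)
    return {"radius": radius_px, "color": f"#{red:02x}00{blue:02x}"}

print(circle_style(blocks_out=40, blocks_total=50))           # a heavily affected cell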
Our Internet outages website was developed by a collaborative team of ISI and USC researchers. In addition to the Keston grant, this research has also been funded by government agencies such as the Department of Homeland Security.
The PipeFish Project
WEI-MIN SHEN
The goal of this research is to develop and test an inexpensive autonomous robot that uses sensors to collect data in underground water pipes—thereby enabling us to assess conditions and detect problems. The project is collaborating with the Los Angeles Department of Water and Power (LADWP).
Far beneath the streets and sidewalks of Los Angeles lies a rarely seen subterranean world—a labyrinthine network of underground pipes extending 7,200 miles across the city, carrying drinking water to more than four million people every day. Like many cities in the U.S., Los Angeles is facing a looming crisis over its aging water infrastructure, and fixing it will be a monumental and expensive task.
At least two-thirds of the city’s underground water pipes are more than 60 years old, and most will reach the end of their useful lives within 15 years. This can cause a host of problems—from burst pipes and loss of water service to road closures, sinkholes, and even potential water contamination.
The PipeFish robot is intended to greatly simplify and facilitate the repair and upgrading of this infrastructure. A PipeFish enters the water system through existing fire hydrants and follows the path of the pipeline to its final destination, where it is “caught” by a net. The footage captured is then uploaded and analyzed, providing operators with a 360-degree virtual reality interior view, without ever setting foot inside the pipe. Signs of damage inside the pipe, such as cracks, high corrosion, or rust, are indicators that can help authorities prioritize repairs, without expensive and disruptive excavation or water service interruptions.
PipeFish features a 360-degree camera, lights, sensors, and navigation technology, controlled by an onboard computer. It measures 20 inches to 30 inches in length and 3 inches to 6 inches in diameter. Constructed of plastic and other rigid materials, PipeFish is designed to be fully autonomous and untethered, so it can freely explore the complex network of underground water pipes. PipeFish is also a modularized robot; i.e., multiple PipeFish robots can form a chain to handle the complex twists and turns in an underground water network.
In 2017, Shen and his team conducted “dry tests” in pipes at the Los Angeles Department of Water and Power’s Sylmar West Facility in the San Fernando Valley. In 2018, the team plans to add water to the system and test the robot in different pipes of various diameters under the streets of Los Angeles.
PipeFish could be equipped with additional sensors to collect more information—including water flow rate, gas space, illegally dumped chemicals and flammable materials. Ultimately, our team hopes that a “school” of PipeFish robots can be programmed to quickly and inexpensively traverse specific paths.