Tepper Faculty Receive Block Center Grants to Study Future of Work and AI for Social Good
The gig economy, breast cancer diagnosis, online platforms and machine bias are among the topics that will be addressed by Tepper faculty research funded through grants from the Block Center for Technology and Society.
With a mission to address the future of work and the application of artificial intelligence to promote social good, the Block Center was established in 2018 thanks to a $15 million gift from CMU alumnus and trustee Keith Block, COO of Salesforce, and his wife, Suzanne Kelley. The center supports research projects that help ensure the benefits of technological innovation are widely shared.
Of the 10 research proposals awarded grants in September 2019, all of which further the center’s mission through interdisciplinary study, four include Tepper faculty members.
Here’s a closer look at the four funded projects that include Tepper faculty:
-
Securing the Gig: Labor Markets, Entrepreneurship and the Rise of the Platform Economy
Matthew Denes, Assistant Professor of Finance
with Spyridon Lagaras, Assistant Professor of Finance at the University of Pittsburgh, and Margarita Tsoutsoura, Associate Professor of Finance and John and Dyan Smith Professor of Management and Family Business at Cornell University
Can you describe your research project and how it relates to entrepreneurship and the platform economy?
The platform economy has considerably transformed U.S. labor markets and businesses. Companies such as Uber and Postmates have upended the markets that they operate in. Yet little is known about the effect of the gig economy on the labor market and entrepreneurial activity. Do the opportunities provided to workers in the gig economy reduce income volatility? Do platform-based jobs stabilize entrepreneurial activity by providing an additional source of income? What is the size and scope of the gig economy? Our project seeks to answer these questions.
What sets you and your team apart to be working on this research?
Researchers have only recently begun to study the gig economy and its importance. What sets my team and me apart is our previous work on entrepreneurship and our experience working with large administrative datasets. We are excited to bring our expertise to these new and important questions.
-
Uncovering the Source of Machine Bias
Yan Huang, Assistant Professor of Business Technologies; BP Junior Faculty Chair AY 2019-2020
Param Vir Singh, Carnegie Bosch Professor of Business Technologies and Marketing
Duyu Chen, Ph.D. student
How does your project promote the mission of the Block Center?
Our project aims to address machine bias and ensure fairness when machine learning and AI are used in decision-making. The recent boom of AI technologies has raised hopes that by replacing human decision-makers with “objective” machines, one could eliminate discrimination against members of protected groups. However, numerous studies have cautioned that machine learning algorithms may inherit, or even amplify, human bias when trained on biased or flawed data. We propose to address machine bias by examining the potential sources of these biases, developing a stylized economic model, investigating how different types of human bias are encoded in the data, and determining which types of human bias will be inherited, amplified, or removed by machines. With this understanding, we will then propose correction mechanisms that are applied before the data is fed into the machine learning algorithm, so that a machine trained on the “corrected” data satisfies different fairness notions with minimal loss in accuracy.
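The team’s correction mechanisms are not spelled out in the article. As a rough illustration of the general idea of fixing training data before the algorithm sees it, the sketch below (an assumption, not the researchers’ method) reweights examples so that each group contributes labels at the overall base rate, then trains an off-the-shelf classifier on the reweighted data.

```python
# Minimal illustrative sketch (not the authors' model): reweight possibly
# biased training labels so each protected group contributes positives at
# the overall rate, then train on the "corrected" data.
import numpy as np
from sklearn.linear_model import LogisticRegression

def parity_weights(y, group):
    """Weight each (group, label) cell by P(label) / P(label | group)."""
    y, group = np.asarray(y), np.asarray(group)
    weights = np.ones_like(y, dtype=float)
    overall_pos = y.mean()
    for g in np.unique(group):
        for label, target in ((1, overall_pos), (0, 1 - overall_pos)):
            mask = (group == g) & (y == label)
            if mask.any():
                cell_rate = mask.sum() / (group == g).sum()
                weights[mask] = target / cell_rate
    return weights

# Hypothetical data: X features, y historical (possibly bias-laden) labels,
# group a binary protected attribute.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
group = rng.integers(0, 2, size=1000)
y = (X[:, 0] + 0.8 * group + rng.normal(size=1000) > 0).astype(int)  # biased labels

clf = LogisticRegression()
clf.fit(X, y, sample_weight=parity_weights(y, group))
```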
Of the research goals for the Block Center, which does your project touch upon, and what do you hope to accomplish with your research?
Our project touches upon the goal of AI and analytics for social good. We examine the “source of machine bias” through an economic lens, and propose better ways to address machine bias. More broadly, our project contributes to designing socially responsible machines.
We are planning to explore the possibility of collaborating with the Department of Human Services to apply our analysis and proposed solution to the context of child welfare services. We hope to use data from this context to identify potential human bias in the training data and its implications for prediction, and to test our proposed approach for obtaining better training data, or adjusting existing training data, before it is used to train the algorithm.
-
Improving Breast Cancer Diagnosis with Interpretable Multimodal Machine Learning
Zachary Chase Lipton, Assistant Professor of Business Technologies
With Adam Perer, Assistant Research Professor, Human-Computer Interaction Institute
How does your project promote the mission of the Block Center, which is to help determine how the benefits of technological innovation can be more widely shared?
Improving health care outcomes seems to be fundamentally aligned with the Block Center’s mission, which includes supporting work that leverages advances in AI to achieve societal benefit. In our project, a collaboration with doctors at Magee Women’s Hospital (part of UPMC), we are focused on improving the quality of radiologic assessment. If successful, our work could help to improve patient outcomes. This particular project focuses on the para-clinical setting (outside direct patient care). In particular, we are focusing on the training loop, leveraging advanced techniques in machine learning, computer vision, and visualization to help doctors identify compelling training cases, discover their blind spots, and connect with peers who are especially suited to reading specific cases.
Methodologically, the proposal builds on related work that I did as a scientist at Amazon in 2017. There we were focused on crowd-sourcing, where you send images to annotators. Even on comparatively straightforward tasks, like categorizing photos based on the most salient objects, we often got different answers from different annotators. Moreover, skill varies considerably from worker to worker, and different workers may have different strengths (say, which subset of images they perform best on). In that work we showed that we could insert machine learning not just to learn from these noisy labels but also to help characterize the strengths and weaknesses of each worker.
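To make the idea of characterizing annotator strengths concrete, here is a toy sketch (hypothetical data and variable names, not the Amazon system): use a majority vote over redundant labels as a proxy for the truth, then tally each annotator’s agreements and systematic mistakes against that proxy.

```python
# Rough sketch of learning annotator reliability from redundant noisy labels.
from collections import Counter, defaultdict

# Hypothetical annotations: item_id -> {annotator_id: label}
annotations = {
    "img1": {"a": "cat", "b": "cat", "c": "dog"},
    "img2": {"a": "dog", "b": "dog", "c": "dog"},
    "img3": {"a": "cat", "b": "dog", "c": "dog"},
}

# Step 1: majority vote as a stand-in for the true label.
consensus = {item: Counter(votes.values()).most_common(1)[0][0]
             for item, votes in annotations.items()}

# Step 2: per-annotator confusion counts (true label -> given label),
# which expose each worker's strengths and blind spots.
confusion = defaultdict(Counter)
for item, votes in annotations.items():
    truth = consensus[item]
    for annotator, label in votes.items():
        confusion[annotator][(truth, label)] += 1

for annotator, counts in sorted(confusion.items()):
    total = sum(counts.values())
    correct = sum(n for (t, l), n in counts.items() if t == l)
    print(f"{annotator}: {correct}/{total} agree with consensus")
```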
In this work we hope to model not only the true labels but also the idiosyncrasies of each radiologist. For example, say that we can predict confidently that for a particular image, two specific radiologists are likely to disagree with each other about how to call the image. This might be a case worth surfacing for discussion during a training session, alongside the ground truth outcome. As a next step we hope to improve these predictions by leveraging multi-modal structure (e.g., by combining images and health record data). Moreover, we hope to improve the training process by not only predicting when a radiologist will miscall an image but also extracting some information from the machine learning model that might help a radiologist to see what the model “sees” that they are missing.
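A minimal sketch of the disagreement-surfacing idea might look like the following (an assumption about the setup, not the project’s actual pipeline): fit one model per reader on that reader’s historical calls over some feature representation, then flag new cases where the two readers’ predicted calls are likely to conflict.

```python
# Toy sketch: per-reader models over hypothetical features, used to flag
# cases where two radiologists are predicted to disagree.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 10))          # hypothetical image/EHR features
true_label = (X[:, 0] > 0).astype(int)  # hypothetical ground truth

# Two simulated readers with different blind spots (errors driven by
# different features).
calls_a = np.where(X[:, 1] > 1.0, 1 - true_label, true_label)
calls_b = np.where(X[:, 2] > 1.0, 1 - true_label, true_label)

reader_a = LogisticRegression().fit(X, calls_a)
reader_b = LogisticRegression().fit(X, calls_b)

# New cases: surface those where the predicted probability of a positive
# call differs sharply between readers -- candidates for a training session.
X_new = rng.normal(size=(50, 10))
p_a = reader_a.predict_proba(X_new)[:, 1]
p_b = reader_b.predict_proba(X_new)[:, 1]
flagged = np.where(np.abs(p_a - p_b) > 0.5)[0]
print("cases to review:", flagged)
```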
Ultimately the calls made by radiologists in breast cancer screening drive important decisions. Should this patient be called back for diagnostic testing? Should that patient be told everything looks fine? Our goal is to improve this decision-making process and thus to improve outcomes. We hope to identify more cancers while performing fewer unnecessary invasive diagnostic tests.
What was your reaction to receiving the grant?
This is a project I’m very excited about. This grant gives us a year of runway to bring on a student, support them, clear away obstacles to data access and preparation, and put ourselves in a position where this line of research can blossom into a larger research project focused on improving breast cancer diagnosis and treatment, and possibly improving radiology outcomes more broadly.
-
Gigs, Risks and Skills: How Online Labor Market Platforms Can Help to Improve Blue Collar Work in a Digital Economy
Erina Ytsma, Assistant Professor of Accounting
With Geoffrey Parker, Professor of Engineering at Dartmouth College
How does your project promote the mission of the Block Center, which is to help determine how the benefits of technological innovation can be more widely shared?
We study the value of job security to low-skilled workers and the role that online labor market platforms can play in income risk mitigation. To do so, we are collaborating with an online labor market platform for blue-collar jobs on a randomized controlled experiment that allows us to estimate workers’ demand for income and job security, as well as the causal effects of pay level, income uncertainty, and job risk on worker productivity, career paths, and well-being. The experiment also allows us to evaluate the feasibility and value of providing risk mitigation in temporary work markets.
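The core of such a randomized experiment is simple to state. The stylized sketch below (hypothetical variable names and simulated numbers, not the actual study design or data) shows the basic comparison: randomly assign workers to a treatment such as a guaranteed-income arm, then estimate its effect on an outcome like productivity as a difference in means with a standard error.

```python
# Stylized sketch of the experimental comparison in a randomized trial.
import numpy as np

rng = np.random.default_rng(42)
n = 2000
treated = rng.integers(0, 2, size=n)                              # random assignment
productivity = 10 + 0.6 * treated + rng.normal(scale=3, size=n)   # simulated outcome

t, c = productivity[treated == 1], productivity[treated == 0]
effect = t.mean() - c.mean()
se = np.sqrt(t.var(ddof=1) / len(t) + c.var(ddof=1) / len(c))
print(f"estimated treatment effect: {effect:.2f} (SE {se:.2f})")
```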
Of the research goals for the Block Center, which does your project touch upon, and what do you hope to accomplish with your research?
Our research project fits within the Future of Work research area of the Block Center. We hope to derive policy recommendations about the need for provisions that increase income and job security for blue-collar workers, as well as recommendations about the role that online labor market platforms can play in providing them.