Ashutosh Trivedi /cs/ en Holding tax software accountable /cs/2024/03/11/holding-tax-software-accountable <span>Holding tax software accountable</span> <span><span>Anonymous (not verified)</span></span> <span><time datetime="2024-03-11T13:03:19-06:00" title="Monday, March 11, 2024 - 13:03">Mon, 03/11/2024 - 13:03</time> </span> <div> <div class="imageMediaStyle focal_image_wide"> <img loading="lazy" src="/cs/sites/default/files/styles/focal_image_wide/public/article-thumbnail/ashutosh-trivedi.png?h=ce387900&amp;itok=xRx7u3Yo" width="1200" height="600" alt="Ashutosh Trivedi"> </div> </div> <div role="contentinfo" class="container ucb-article-tags" itemprop="keywords"> <span class="visually-hidden">Tags:</span> <div class="ucb-article-tag-icon" aria-hidden="true"> <i class="fa-solid fa-tags"></i> </div> <a href="/cs/taxonomy/term/481" hreflang="en">Ashutosh Trivedi</a> <a href="/cs/taxonomy/term/439" hreflang="en">Research</a> </div> <a href="/cs/grace-wilson">Grace Wilson</a> <div class="ucb-article-content ucb-striped-content"> <div class="container"> <div class="paragraph paragraph--type--article-content paragraph--view-mode--default"> <div class="ucb-article-content-media ucb-article-content-media-above"> <div> <div class="paragraph paragraph--type--media paragraph--view-mode--default"> <div> <div class="imageMediaStyle large_image_style"> <img loading="lazy" src="/cs/sites/default/files/styles/large_image_style/public/article-image/ashutosh-trivedi_0.png?itok=3BLpstg9" width="1500" height="1547" alt="Ashutosh Trivedi"> </div> </div> </div> </div> </div> <div class="ucb-article-text d-flex align-items-center" itemprop="articleBody"> <div><div class="ucb-box ucb-box-title-hidden ucb-box-alignment-right ucb-box-style-fill ucb-box-theme-white"> <div class="ucb-box-inner"> <div class="ucb-box-title"></div> <div class="ucb-box-content"> <blockquote> "Tax preparation software is critical for our society, but if the software is incorrect, people using it are responsible 
for accuracy-related penalties."<br> - Ashutosh Trivedi </blockquote> </div> </div> </div> <p>When you file your taxes, you are responsible for any errors, even those created by the software you trust to compute your tax return. Since over 93 percent of individual tax returns were filed electronically in 2023, many in the United States are vulnerable to bugs in these systems.</p> <p>This didn't sit right with Associate Professor of Computer Science Ashutosh Trivedi, and his work is getting positive attention from the IRS.</p> <p>"Tax preparation software is critical for our society, but if the software is incorrect, people using it are responsible for accuracy-related penalties," he said.&nbsp;</p> <p>This led Trivedi and fellow researchers to seek a way to verify the correctness of tax software. But how to do it?</p> <h2>No 'oracles'</h2> <p>"Tax software has what is commonly called an oracle problem. It means that you can think of the input, but you don't know what the ideal output is for that situation," he said.&nbsp;</p> <p>Unlike a simple calculator where you can add two numbers and have a guaranteed output, the U.S. tax system is highly complex and expressed in&nbsp;potentially ambiguous&nbsp;natural language.</p> <p>There are gaps and loopholes that leave more of the code up for interpretation than you might think.</p> <p>The United States tax code, notes included, is also over 9,000 pages. As the code changes every year, software can unintentionally miss new requirements.&nbsp;</p> <p>The researchers' solution? Legal precedent. In United States law, similar cases should have similar outcomes. 
This can be replicated in a software engineering approach called "metamorphic testing."</p> <h2>It's the little things</h2> <p>"In metamorphic testing, you present two inputs to the system that differ from each other so that the correct output of the program for these inputs must be in a certain predictable relationship," Trivedi said.&nbsp;</p> <p>For instance, one may not know someone's exact tax return, but one can expect that another individual with the same taxable characteristics, except that their spouse is blind, must receive a higher standard deduction.</p> <p>Due to privacy concerns, there is no accessible dataset of taxpayer answers to forms, so it was necessary for Trivedi and his team to delve deep into the tax documentation and create fictional personas based on edge and corner cases. These variations could then be tested side by side.</p> <p>By creating personas with very similar taxable characteristics, you can test whether people who have similar inputs are getting similar results, or if something has gone wrong.&nbsp;</p> <p>They found real bugs in open-source tax software, especially when returns were close to zero dollars, or when a taxpayer was disabled. They then created simple, easy-to-follow flow charts explaining where the errors occurred. This test can be applied to any tax preparation software.&nbsp;</p> <h2>Law into logic</h2> <p>Trivedi sees the power of this approach as going beyond tax software. What if interested parties could transform the natural language of law into code upon which experiments could be conducted?</p> <p>"For the first time you have software that, if proven correct, could stand in for the text that the government releases," he said. 
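The pairwise-persona idea above can be sketched in code. This is a minimal metamorphic-testing sketch, not the team's actual test suite: `standard_deduction` is a hypothetical stand-in for the tax software under test, and the dollar figures are illustrative 2023-style amounts rather than authoritative IRS values. Note that the test never needs the "correct" answer for either persona (the oracle problem); it only checks the relation between the two outputs.

```python
# Metamorphic testing sketch: we never compute the "true" answer for either
# persona (the oracle problem); we only check a relation that must hold
# between the outputs for two near-identical inputs.

def standard_deduction(filing_status, age, blind, spouse_blind=False):
    # Hypothetical stand-in for the tax software under test.
    # Figures are illustrative 2023-style amounts, not authoritative.
    base = 27700 if filing_status == "married_joint" else 13850
    extra = 1500 if filing_status == "married_joint" else 1850
    bonus = 0
    if age >= 65:
        bonus += extra
    if blind:
        bonus += extra
    if spouse_blind:
        bonus += extra
    return base + bonus


def test_blind_spouse_relation():
    # Two personas identical in every taxable characteristic, except that
    # persona_b's spouse is blind: persona_b must receive a strictly higher
    # standard deduction, whatever the exact numbers turn out to be.
    persona_a = standard_deduction("married_joint", age=40, blind=False)
    persona_b = standard_deduction("married_joint", age=40, blind=False,
                                   spouse_blind=True)
    assert persona_b > persona_a, "blind spouse must raise the deduction"


test_blind_spouse_relation()
```

The same relation can be instantiated across many persona pairs (age-65 thresholds, disability, returns near zero dollars) to probe exactly the edge cases where the researchers found real bugs.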
Trivedi said he believes code would be easier to interrogate for fairness and discrimination than natural language, potentially increasing the impartiality of law.&nbsp;</p> <p>The project has intrigued the IRS, which has invited the team behind the research to present at the IRS-TPC Joint Research Conference on Tax Administration in June.</p></div> </div> </div> </div> </div> <div>Errors in tax software could keep you from your best refund, or worse, leave you liable for inaccuracies. Now, computer science researchers at ¶¶ÒõÂÃÐÐÉä Boulder are working to ensure the correctness of tax software.</div> <h2> <div class="paragraph paragraph--type--ucb-related-articles-block paragraph--view-mode--default"> <div>Off</div> </div> </h2> <div>Traditional</div> <div>0</div> <div>On</div> <div>White</div> Mon, 11 Mar 2024 19:03:19 +0000 Anonymous 2434 at /cs Trivedi seeks to democratize artificial intelligence through CAREER award  /cs/2022/06/23/trivedi-seeks-democratize-artificial-intelligence-through-career-award <span>Trivedi seeks to democratize artificial intelligence through CAREER award&nbsp;</span> <span><span>Anonymous (not verified)</span></span> <span><time datetime="2022-06-23T14:05:18-06:00" title="Thursday, June 23, 2022 - 14:05">Thu, 06/23/2022 - 14:05</time> </span> <div> <div class="imageMediaStyle focal_image_wide"> <img loading="lazy" src="/cs/sites/default/files/styles/focal_image_wide/public/article-thumbnail/ashutosh-trivedi-photo.png?h=f3b0f4c5&amp;itok=rIk5zYkR" width="1200" height="600" alt="Ashutosh Trivedi"> </div> </div> <div role="contentinfo" class="container ucb-article-categories" itemprop="about"> <span class="visually-hidden">Categories:</span> <div class="ucb-article-category-icon" aria-hidden="true"> <i class="fa-solid fa-folder-open"></i> </div> <a href="/cs/taxonomy/term/465"> News </a> </div> <div role="contentinfo" class="container ucb-article-tags" itemprop="keywords"> <span class="visually-hidden">Tags:</span> <div 
class="ucb-article-tag-icon" aria-hidden="true"> <i class="fa-solid fa-tags"></i> </div> <a href="/cs/taxonomy/term/481" hreflang="en">Ashutosh Trivedi</a> <a href="/cs/taxonomy/term/482" hreflang="en">CAREER</a> <a href="/cs/taxonomy/term/483" hreflang="en">PVL</a> </div> <a href="/cs/grace-wilson">Grace Wilson</a> <div class="ucb-article-content ucb-striped-content"> <div class="container"> <div class="paragraph paragraph--type--article-content paragraph--view-mode--default"> <div class="ucb-article-content-media ucb-article-content-media-above"> <div> <div class="paragraph paragraph--type--media paragraph--view-mode--default"> </div> </div> </div> <div class="ucb-article-text d-flex align-items-center" itemprop="articleBody"> <div><p dir="ltr"><a href="https://astrivedi.github.io/www/index.html" rel="nofollow">Ashutosh Trivedi</a>, an assistant professor in the <a href="/cs/" rel="nofollow">Department of Computer Science at ¶¶ÒõÂÃÐÐÉä Boulder,</a> is working to democratize artificial intelligence by making machine learning more programmable, trustworthy and accessible to everyone.&nbsp;</p> <p dir="ltr">He has just been presented with a prestigious CAREER award from the National Science Foundation to do so. The award supports the research and educational activities of early career faculty members who have the potential to become leaders in their field. 
Trivedi's award provides $600,000 over the next five years.&nbsp;<a href="/engineering/2022/06/26/college-engineering-celebrates-6-nsf-career-award-winners-2022" rel="nofollow">Six faculty members within the College of Engineering and Applied Science received CAREER Awards from the National Science Foundation in 2022.</a></p> <p>Trivedi will use the award to improve abstraction as an alternative to traditional neural networks, which have huge energy and data requirements, and to build our ability to trust and understand machine learning.</p> <p>Trivedi, who was born in India and raised during the country's intensive focus on computers in the 80s and 90s, knows how essential access is. Reminiscing about this pivotal moment in India's history, Trivedi said, "By having the power to talk to computers, we transformed not only our own lives, but those around us."&nbsp;</p> <p>If people everywhere aren't given access to artificial intelligence now, he said, it will remain confined to applications with high capital investment, rather than being a vehicle for widespread innovative problem-solving.</p> <p>"Computers can be little engines of creativity and they can co-create with us. Humans are not the only sources of beauty and ingenuity," Trivedi said.&nbsp;&nbsp;</p> <p dir="ltr">To understand the fundamental change Trivedi is pursuing in his research, we must first understand what machine learning looks like right now. This current moment, he said, is as transformational as the shift from computers that took up several rooms to those sitting in a wristwatch today.&nbsp;</p> <p dir="ltr">Right now, many applications rely on neural networks and reinforcement learning.&nbsp;</p> <p dir="ltr">Neural networks are large computer programs that, given huge amounts of data, create working definitions for what they have been trained on. 
For example, after viewing a large number of cat images, the machine "learns" to see a grouping of pixels as a cat.&nbsp;</p> <p>But, Trivedi said, there's a problem with that. The program can't explain what a cat is beyond that grouping of pixels.&nbsp;</p> <p dir="ltr">"If you train something with a neural network, you do not know what has been learned. We cannot explain why something is a cat or not a cat," he said. The machine's learning process and reasoning are hidden from us.&nbsp;</p> <p>The high skill floor and intense resource demands of these large neural networks can also keep machine learning away from passionate but under-resourced programmers who can’t afford the massive costs of creating and maintaining the networks.&nbsp;</p> <p dir="ltr">Reinforcement learning, especially when combined with neural networks, would be a promising machine learning approach to problem solving if it could be made more trustworthy and capable of solving complex problems.</p> <p>Reinforcement learning is similar to training a dog, Trivedi said. By rewarding good behaviors over and over and chastising the bad ones, you can slowly train a dog to do many tricks, like shake hands or heel.</p> <p dir="ltr">But when and how should a reward be given? In computer science, this is a question programmers must answer each time they create a reinforcement learning application. If you reward at the wrong time, you could cause a program to learn the wrong thing, like mis-training a dog to bark when it sees food.&nbsp;</p> <p dir="ltr">Because of a bad internal reward system, a programmer could create a program that inadvertently damages a power grid or makes racially biased decisions on who can access a home loan.&nbsp;</p> <p dir="ltr">Trivedi's CAREER proposal focuses on building tools for reinforcement learning that free the programmer from the burden of translating desired outcomes to specific rewards. 
Instead of using a gut feeling for the reward, programmers would have a rigorous, formal system they can trust to assist them.</p> <p dir="ltr">But, even if rewards are given correctly, reinforcement learning traditionally doesn't work as well for the large, complicated problems machine learning has so much promise for.</p> <p dir="ltr">So Trivedi wants to increase the scale of tasks that reinforcement learning can be used for by exploiting modularity – a design principle that emphasizes breaking apart an overall system into simpler, well-defined parts.&nbsp;</p> <p>Trivedi will use "recursive Markov decision processes" to describe the larger system as a collection of simpler systems, then build the overall solution to the complicated problem by combining the solutions to those simpler subtasks.&nbsp;</p> <p dir="ltr">These subtasks are less energy- and time-intensive to solve, and the resulting modularity promotes reusability and makes explanations easier.&nbsp;</p> <p dir="ltr">Through reinforcement learning that has a rigorous, formal system and that supports modularity, Trivedi's CAREER award opens a new path for complex artificial intelligence alongside neural networks, one that promises to be trustworthy, powerful and accessible to all.&nbsp;</p></div> </div> </div> </div> </div> <div>Trivedi is working to democratize artificial intelligence by making machine learning more programmable, trustworthy and accessible to everyone through a prestigious CAREER award from the National Science Foundation. 
</div> <h2> <div class="paragraph paragraph--type--ucb-related-articles-block paragraph--view-mode--default"> <div>Off</div> </div> </h2> <div>Traditional</div> <div>0</div> <div>On</div> <div>White</div> Thu, 23 Jun 2022 20:05:18 +0000 Anonymous 2103 at /cs NSF grants aim to improve security and safety of autonomous cars and systems /cs/2020/10/29/nsf-grants-aim-improve-security-and-safety-autonomous-cars-and-systems <span>NSF grants aim to improve security and safety of autonomous cars and systems</span> <span><span>Anonymous (not verified)</span></span> <span><time datetime="2020-10-29T18:00:00-06:00" title="Thursday, October 29, 2020 - 18:00">Thu, 10/29/2020 - 18:00</time> </span> <div> <div class="imageMediaStyle focal_image_wide"> <img loading="lazy" src="/cs/sites/default/files/styles/focal_image_wide/public/article-thumbnail/nsf_grant-image-hero_rev2.png?h=e6dc7cb9&amp;itok=890995Vt" width="1200" height="600" alt="Graphic showing self-driving car vulnerabilities "> </div> </div> <div role="contentinfo" class="container ucb-article-tags" itemprop="keywords"> <span class="visually-hidden">Tags:</span> <div class="ucb-article-tag-icon" aria-hidden="true"> <i class="fa-solid fa-tags"></i> </div> <a href="/cs/taxonomy/term/481" hreflang="en">Ashutosh Trivedi</a> <a href="/cs/taxonomy/term/485" hreflang="en">Majid Zamani</a> </div> <div class="ucb-article-content ucb-striped-content"> <div class="container"> <div class="paragraph paragraph--type--article-content paragraph--view-mode--default"> <div class="ucb-article-content-media ucb-article-content-media-above"> <div> <div class="paragraph paragraph--type--media paragraph--view-mode--default"> </div> </div> </div> <div class="ucb-article-text d-flex align-items-center" itemprop="articleBody"> </div> </div> </div> </div> <div>Researchers at ¶¶ÒõÂÃÐÐÉä Boulder, including Majid Zamani and Ashutosh Trivedi from computer science, are leading four new NSF-funded projects. 
</div> <script> window.location.href = `/engineering/2020/10/30/nsf-grants-aim-improve-security-and-safety-autonomous-cars-and-systems`; </script> <h2> <div class="paragraph paragraph--type--ucb-related-articles-block paragraph--view-mode--default"> <div>Off</div> </div> </h2> <div>Traditional</div> <div>0</div> <div>On</div> <div>White</div> Fri, 30 Oct 2020 00:00:00 +0000 Anonymous 1677 at /cs