Ben chats with Gias Uddin, an assistant professor at York University in Toronto, where he teaches software engineering, data science, and machine learning. His research focuses on designing intelligent tools for testing, debugging, and summarizing software and AI systems. He recently published a paper about detecting errors in code generated by LLMs. Gias and Ben discuss the concept of hallucinations in AI-generated code, the need for tools to detect and correct those hallucinations, and the potential for AI-powered tools to generate QA tests. https://stackoverflow.blog/2024/09/20/detecting-errors-in-ai-generated-code/
Other posts in this group

The world has changed a lot since Stack Overflow started. It's time for our brand to change with it. https://stackoverflow.blog/2025/05/08/a-new-look-for-whats-next/

This post explores crucial lessons learned in the trenches of data licensing, drawing insights from Stack Overflow and the growing importance of socially responsible data practices in a changing inte…

How can engineering teams move beyond traditional metrics like velocity to create real business impact? https://stackoverflow.blog/2025/05/08/moving-beyond-velocity-measuring-real-business-impact/

Community “management” at its core is supporting and enabling communities to manage themselves. https://stackoverflow.blog/2025/05/07/behind-the-scenes-community-management-at-stack-overflow/

Ryan welcomes Jeu George, cofounder and CEO of Orkes, to the show for a conversation about microservices orchestration. They talk through the evolution of microservices and the role of orchestration tool…

You might already be familiar with the programming language best suited to building on blockchains. https://stackoverflow.blog/2025/05/05/the-consensus-is-typescript-is-the-easiest-way-to-build-on-bl…