Penn scientists teach computer programs how to teach programming

While the computers of today dwarf their predecessors when it comes to power and speed, the programming process hasn’t fundamentally changed in decades. It’s still a tedious practice of specifying step-by-step instructions, with an equally large effort devoted to ensuring those instructions produce the desired results.   

In an effort to change programming into a more intuitive process, Penn computer scientists and their colleagues are redefining what learning to program means.  

Their research could have an impact on teaching all manner of math- and logic-based subjects.

This effort is an offshoot of ExCAPE, a $10 million project that is part of the National Science Foundation’s Expeditions in Computing program. Led by Rajeev Alur, a professor of computer and information science in the School of Engineering and Applied Science, the team’s overarching objective is to create “automated program synthesis tools.” These tools would check whether human-supplied code operates correctly, or even supply suggestions for filling in the coding details of more generally defined goals. Essentially, the researchers envision the future of programming as a collaborative effort between humans and computers.

Computers that have a big-picture understanding of what a program is supposed to be doing can give better feedback to programmers. Alur and his colleagues immediately realized that this could have a major impact on how programming is taught.       

“Existing tools can tell you if your answer is wrong,” Alur says. “But they don’t tell you how wrong you were.”

Alur enlisted colleagues—Penn Ph.D. student Loris D’Antoni; Mahesh Viswanathan and Dileep Kini from the University of Illinois at Urbana–Champaign (UIUC); Björn Hartmann of the University of California, Berkeley; and Microsoft researcher Sumit Gulwani—to develop a tutoring tool and began testing it with students in programming classes at Penn, UIUC and Reykjavik University in Iceland. Their first subject was a fundamental programming concept called automata theory, in which students draw flow-chart-like diagrams of simple machines that represent the steps involved in the execution of a piece of code.

A typical problem might ask a student to draw a machine that determines whether a user-supplied password meets certain requirements, such as being at least eight characters long and having at least two numerals. The tutoring tool goes beyond checking whether such a machine accepts only valid passwords and rejects invalid ones; if the student’s diagram produces faulty results, the tool can determine which parts of the machine are responsible and highlight them.  
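To make that example concrete, here is a minimal sketch, not the researchers’ actual tool, of the kind of finite-state machine such a problem asks for: each state records how many characters and how many numerals have been seen so far (both counts capped), and the machine accepts once both thresholds are met. The function names and test strings below are illustrative assumptions only.

```python
# A minimal sketch (not the researchers' actual tool) of a finite-state machine
# for the rule "at least eight characters and at least two numerals".
# Each state is a pair (c, d): characters seen so far (capped at 8) and
# numerals seen so far (capped at 2), giving 9 x 3 = 27 states in all.

def step(state, ch):
    """Transition function: consume one input character."""
    c, d = state
    c = min(c + 1, 8)                           # count characters, saturating at 8
    d = min(d + (1 if ch.isdigit() else 0), 2)  # count numerals, saturating at 2
    return (c, d)

def accepts(password):
    """Run the machine from the start state; accept only in state (8, 2)."""
    state = (0, 0)
    for ch in password:
        state = step(state, ch)
    return state == (8, 2)

print(accepts("abc12def"))   # True: eight characters, two numerals
print(accepts("abcdefgh"))   # False: long enough, but no numerals
print(accepts("a1b2"))       # False: two numerals, but too short
```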

Moreover, a higher-level understanding of the goal allows the tutoring tool to determine whether a wrong answer might be the result of the student misunderstanding the question. For example, if a student’s machine accepts only passwords with exactly two numerals, rather than ones with at least two, the system highlights the words “at least” in the question rather than a part of the diagram.
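One way such feedback could be generated, offered purely as an illustration rather than the team’s published algorithm, is to compare the student’s machine against a reference machine and search for a short input on which the two disagree; that counterexample helps pinpoint whether the error lies in the diagram or in a misread requirement. The helper below, with hypothetical function names, sketches that idea.

```python
# A hedged illustration (not the team's published algorithm): one simple way a
# grader can produce concrete feedback is to search for a short string that the
# student's machine and the reference machine classify differently, then use
# that counterexample to decide what to highlight.

from itertools import product

def find_counterexample(student_accepts, reference_accepts,
                        alphabet="ab1", max_length=10):
    """Return the shortest string on which the two machines disagree, or None."""
    for length in range(max_length + 1):
        for chars in product(alphabet, repeat=length):
            s = "".join(chars)
            if student_accepts(s) != reference_accepts(s):
                return s
    return None

# Hypothetical student error from the article: requiring *exactly* two numerals.
def student_accepts(s):
    return len(s) >= 8 and sum(ch.isdigit() for ch in s) == 2

def reference_accepts(s):
    return len(s) >= 8 and sum(ch.isdigit() for ch in s) >= 2

# Prints a valid password with three numerals that the student's machine rejects.
print(find_counterexample(student_accepts, reference_accepts))
```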

Alur and his colleagues found that students who used the tool came to the correct answers faster than their classmates who only got right-or-wrong feedback. Moreover, the former group spent more time doing optional practice problems.  

The researchers believe that any subject with questions and answers that can be phrased in mathematical terms could make use of their tutoring tool.

“I think it has great potential for high school algebra and geometry,” Alur says. “If you’re doing problems sitting in your room, you really want feedback right away.”
