O'Really?

February 13, 2023

Join us to discuss code comprehension on Monday 6th March at 2pm GMT

Filed under: sigcse — Duncan Hull @ 8:29 am
CC licensed puzzle image via flaticon.com

It’s all very well getting an AI to write your code for you, but reading and writing code are not the same as understanding code. So what is going on in novices’ brains when they learn to actually understand the code they are reading and writing? Join us on Monday 6th March at 2pm GMT to discuss a paper by Quintin Cutts and Maria Kallia from the University of Glasgow on this very topic [1]. From the abstract:

An approach to code comprehension in an introductory programming class is presented, drawing on the Text Surface, Functional and Machine aspects of Schulte’s Block Model, and emphasising programming as a modelling activity involving problem and machine domains. To visually connect the domains and a program, a key diagram conceptualising the three aspects lies at the approach’s heart, alongside instructional exposition and exercises, which are all presented. Students find the approach challenging initially, but most recognise its value later, and identify, unexpectedly, the value of the approach for problem decomposition, planning and coding.

We’ll be joined by one of the co-authors (Quintin Cutts), who’ll give us a lightning talk summary of the paper to kick off our journal club discussion.

All welcome. As usual we’ll be meeting on Zoom, details at sigcse.cs.manchester.ac.uk/join-us

References

  1. Quintin Cutts and Maria Kallia (2023) Introducing Modelling and Code Comprehension from the First Days of an Introductory Programming Class in CEP ’23: Proceedings of 7th Conference on Computing Education Practice Pages 21–24 DOI:10.1145/3573260.3573266

July 4, 2022

Join us to discuss the implications of the OpenAI Codex on introductory programming, Monday 4th July at 2pm BST

Automatic code generators have been with us a while, but how do modern AI-powered bots perform on introductory programming assignments? Join us to discuss the implications of the OpenAI Codex on introductory programming courses on Monday 4th July at 2pm BST. We’ll be discussing a paper by James Finnie-Ansley, Paul Denny, Brett A. Becker, Andrew Luxton-Reilly and James Prather [1] for our monthly SIGCSE journal club meetup on Zoom. Here is the abstract:

Recent advances in artificial intelligence have been driven by an exponential growth in digitised data. Natural language processing, in particular, has been transformed by machine learning models such as OpenAI’s GPT-3 which generates human-like text so realistic that its developers have warned of the dangers of its misuse. In recent months OpenAI released Codex, a new deep learning model trained on Python code from more than 50 million GitHub repositories. Provided with a natural language description of a programming problem as input, Codex generates solution code as output. It can also explain (in English) input code, translate code between programming languages, and more. In this work, we explore how Codex performs on typical introductory programming problems. We report its performance on real questions taken from introductory programming exams and compare it to results from students who took these same exams under normal conditions, demonstrating that Codex outscores most students. We then explore how Codex handles subtle variations in problem wording using several published variants of the well-known “Rainfall Problem” along with one unpublished variant we have used in our teaching. We find the model passes many test cases for all variants. We also explore how much variation there is in the Codex generated solutions, observing that an identical input prompt frequently leads to very different solutions in terms of algorithmic approach and code length. Finally, we discuss the implications that such technology will have for computing education as it continues to evolve, including both challenges and opportunities. (see accompanying slides)
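For readers who haven’t met it, the “Rainfall Problem” mentioned in the abstract is a classic introductory programming exercise with many published variants. One common formulation (this sketch is illustrative only, not the exact variant used in the paper) reads integer rainfall readings until a sentinel value and averages the non-negative ones:

```python
def rainfall(readings):
    """Average the non-negative readings that appear before the
    sentinel value 99999; return 0.0 if there are none."""
    total, count = 0, 0
    for value in readings:
        if value == 99999:   # sentinel terminates the input
            break
        if value >= 0:       # ignore spurious negative readings
            total += value
            count += 1
    return total / count if count else 0.0

print(rainfall([12, -5, 3, 99999, 40]))  # → 7.5
```

Deceptively simple, but novices famously struggle with the interplay of the sentinel, the filtering and the division-by-zero edge case, which is what makes it a useful probe for code generators too.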

All welcome, details at sigcse.cs.manchester.ac.uk/join-us. Thanks to Jim Paterson at Glasgow Caledonian University for nominating this month’s paper.

References

  1. James Finnie-Ansley, Paul Denny, Brett A. Becker, Andrew Luxton-Reilly, James Prather (2022) The Robots Are Coming: Exploring the Implications of OpenAI Codex on Introductory Programming ACE ’22: Australasian Computing Education Conference Pages 10–19 DOI:10.1145/3511861.3511863

December 17, 2008

Happy Christmas Lectures 2008

One of the most important Christmas traditions in Europe, aside from drinking too much, excessive eating and generally conspicuous over-consumption, is the Royal Institution Christmas Lectures. This year they are being given by Professor Christopher Bishop (pictured right), Chief Scientist at Microsoft Research, and are on the subject of the Quest for the Ultimate Computer. This hi-tech trek includes subjects such as machine learning, microchip design, artificial intelligence and Web technology. Here is the blurb from one of the lectures to give you a flavour:

“Computers are extraordinary machines, able to perform feats of arithmetic that far exceed the capabilities of any human. They can store a huge quantity of data, and recall it perfectly in the blink of an eye. They can even beat the world champion at chess. So why do computers struggle to solve apparently simple tasks such as understanding speech, or translating text between languages? Why is a 3 year old toddler better at recognising everyday objects than the world’s most powerful supercomputer? In the last of this year’s Christmas Lectures, Chris Bishop will look at one of the great frontiers of computer science. We’ll see how some of the toughest computational problems are now being tackled by giving computers the ability to learn solutions for themselves, in much the same way as people learn by example. This has led to impressive progress with problems such as recognising handwriting and finding information on the web. But we are only beginning to explore the power of computation, and there are many challenges ahead in our quest for the ultimate computer.”
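As a toy illustration of the “learning by example” Bishop describes (this sketch is mine, not anything from the lectures), the simplest possible learner classifies new data by copying the label of the most similar example it has already seen:

```python
def nearest_neighbour(examples, query):
    """Classify `query` with the label of the closest training example
    (squared Euclidean distance) -- learning purely from examples."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(examples, key=lambda ex: dist(ex[0], query))[1]

# Labelled examples: (feature vector, label)
examples = [((0.0, 0.0), "dot"), ((5.0, 5.0), "dash")]
print(nearest_neighbour(examples, (0.5, 1.0)))  # → dot
```

No rules are programmed in: the “knowledge” lives entirely in the examples, which is the essential idea behind the handwriting recognisers mentioned in the blurb, albeit with vastly more data and cleverer models.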

Broadcast on Channel 5 (starting Monday 29th December; consult your UK TV guide for details), these lectures are aimed at children, but can be enjoyed by kids of all ages (including grown-ups). The lectures will also be available as a webcast from rigb.org, and probably YouTube as well. Whatever you’re doing over the coming holidays, have a very happy Christmas, pagan solstice festival or winterval. Wherever you are, don’t forget to enjoy an intellectually nourishing side-portion of Computer Science with your festive feasting!

References

  1. http://www.rigb.org/christmaslectures08/
  2. Watch this: Royal Institution Christmas Lectures 2008, The Guardian 2008-12-29
  3. Review of Last Night’s TV: Christmas Lectures, The Independent 2008-12-30
  4. John Benyon Christmas Lectures: Untangling the Web
  5. Rich from Bechtle Christmas Lectures 2008, much better!

[Picture of Chris Bishop by Kaustav Bhattacharya]

July 25, 2006

AAAI’06: Highlights and conclusions

The AAAI conference finished last Thursday; here are some highlights and papers that might be worth reading if you are interested in building and/or using a more “intelligent” (and possibly semantic) web in bioinformatics.

Here are the papers and talks I enjoyed the most; I hope you might also find them useful or inspiring.

  1. Unifying Logical and Statistical AI talk given by Pedro Domingos.

    Intelligent agents must be able to handle the complexity and uncertainty of the real world. Logical AI (of which the semantic web is an example) has focused mainly on the former, and statistical AI (e.g. machine learning) on the latter. The two approaches can be united, with significant benefits, some of which are demonstrated by the Alchemy system.

  2. Developing an intelligent personal assistant: the CALO (Cognitive Assistant that Learns and Organizes) project, talk given by Karen Myers.

    CALO is a desktop assistant that learns what you do in the lab / office. Sounds spooky, but involves some interesting technology and fascinating research questions.

  3. Bookmark hierarchies and collaborative recommendation by Ben Markines, Lubomira Stoilova and Filippo Menczer.

    Describes an open-source, academically oriented social bookmarking site where you can donate your bookmarks to science at GiveALink.

  4. Social network-based Trust in Prioritised Default Logic by Yarden Katz and Jennifer Golbeck.

    Who and how can you trust on the Web?

  5. Google vs Berners-Lee was a memorable debate. According to Jim Hendler, Tim and Peter are reconciling their differences now.

Not particularly webby, but…

…entertaining nonetheless.

  1. Stephen Muggleton’s talk on Computational Biology and Chemical Turing Machines went down well, but unfortunately I was stuck in a parallel track, experiencing “death by ontology”.
  2. Bruce Buchanan gave a talk, What Do We Know About Knowledge?: a roller-coaster ride through the last 2000+ years of human attempts to understand what knowledge is, how to represent it and why it is powerful.
  3. Winning the DARPA Grand Challenge with an AI robot called Stanley, talk given by Sebastian Thrun: an amazing presentation on driving a robotic car through the desert over rough terrain. However, it doesn’t take too much imagination to think of horrific applications of this. Next year they will try to drive it from San Francisco to Los Angeles on a public freeway, and Stanley hasn’t even passed its driving test yet!

Turing’s dream

Appropriately, the conference, which was subtitled Celebrating 50 Years of AI, finished with two talks by Lenhart K. Schubert and Stuart M. Shieber about the Turing test. The first discussed Turing’s dream and the Knowledge Challenge; the second asked Does the Turing Test Demonstrate Intelligence or Not? Now that I’m back in Manchester, where Turing once worked, I can’t help wondering: what would Alan make of the current state of AI and the semantic web? I think there are several possibilities. He could be thinking:

  • EITHER: Fifty-odd years later, they’re not still wasting time working on that Turing test, are they?!
  • OR: he is smugly satisfied that he devised a test that no machine has passed, and perhaps never will, but that has provided us with a satisfactory operational definition of “intelligence”;
  • …AND: What the hell is the “Semantic Web”?

We will never know what Alan Turing would make of today’s efforts to build a more intelligent web. However, that won’t stop me speculating that he would be impressed by the current uses of computers (intelligent or otherwise) to drive robots through the desert, perform all sorts of computations on proteins and search for information on this massive distributed global knowledge-base we call the “Web”. Not bad for 50 years of work; here’s to the next 50…

References

  1. Alan Turing (1950) Computing Machinery and Intelligence: The Turing Test. Mind 59(236):433–460
  2. Stephen H. Muggleton (2006) Exceeding human limits: The Chemical Turing Machine. Nature 440:409–410
  3. Stephen H. Muggleton (2006) Towards Chemical Universal Turing Machines in Proceedings of the 21st National Conference on Artificial Intelligence
  4. Picture credit: Image from Steve Jurvetson
  5. This post was originally published on nodalpoint with comments

July 21, 2006

AAAI: Dude, Where’s My Service?

As the number of bioinformatics services on the web increases, finding a tool or database that performs the task you require can be problematic. At the AAAI poster session on Wednesday, I presented our paper describing a novel solution to this problem. It uses a reasoner to “intelligently” search for web services, by semantically matching service requests with advertisements, and has some advantages over comparable solutions…

I won’t go into all the gory details here, but our technique extends and complements current approaches to matchmaking services. One of the key features described in the paper is that it allows you to describe the relationship(s) between the input and output of a service, e.g. what is the relationship between the input and output protein sequence of InterProScan? This relationship can help match requests for services with their adverts with higher precision and recall. I don’t mind admitting it’s been hard work getting this research published, because a large part of the AI community shamelessly uses toy and fictitious scenarios to motivate their work. Then they build incredibly complicated software stacks that are only understood by the small clique of people that designed them. When you show some of these people real-world bioinformatics services, they don’t seem to care too much, preferring to bury their heads in the sand of make-believe. There, that’s got it off my chest!
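For a flavour of what baseline input/output matchmaking looks like, here is a toy “plug-in” match in the spirit of Paolucci et al. [2]: an advert satisfies a request if it accepts at least the requested input type and produces an output at least as specific as the requested one. This sketch is mine, the type names are invented for illustration, and the reasoner-based technique in our paper is considerably richer:

```python
# Toy class hierarchy: child -> parent (single inheritance).
HIERARCHY = {
    "ProteinSequence": "Sequence",
    "DNASequence": "Sequence",
    "Sequence": "Thing",
    "DomainAnnotation": "Annotation",
    "Annotation": "Thing",
}

def is_subclass(child, parent):
    """True if `child` equals or descends from `parent`."""
    while child is not None:
        if child == parent:
            return True
        child = HIERARCHY.get(child)
    return False

def plugin_match(request, advert):
    """An advert satisfies a request if it accepts at least the
    requested input type and its output is at least as specific
    as the requested output."""
    return (is_subclass(request["input"], advert["input"])
            and is_subclass(advert["output"], request["output"]))

advert = {"input": "Sequence", "output": "DomainAnnotation"}
request = {"input": "ProteinSequence", "output": "Annotation"}
print(plugin_match(request, advert))  # → True
```

Plain type subsumption like this says nothing about how a service’s output relates to its input, which is exactly the gap the relationship descriptions in our paper are meant to fill.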

So it was reassuring when people came by the poster, listened to my spiel and asked lots of questions. Ora Lassila from Nokia (one of the people responsible for hyping the whole idea up in the first place) dropped by to have a look. He was interested in adapting the technique for locating services in a registry used by mobile devices. (I wonder if anyone out there needs BLAST on their mobile phone?!) It was good to meet Ora and talk about semantics.

There is nothing quite like standing in front of a poster for three hours and tirelessly explaining it to complete strangers who work in disparate fields. It certainly helps to get your ideas straight. Where would we be without conferences?

References

  1. Danny Leiner (2000) Dude, Where’s My Car?
  2. Massimo Paolucci, Takahiro Kawamura, Terry Payne and Katia Sycara (2002) Semantic Matching of Web Service Capabilities
  3. Duncan Hull, Evgeny Zolin, Andrey Bovykin, Ian Horrocks, Ulrike Sattler and Robert Stevens (2006) Deciding Semantic Matching of Stateless Services in the Proceedings of the Twenty-First National Conference on Artificial Intelligence (AAAI-06)
