
AI is great for research 📖

An AI inside a book.

Generative AI assistants like ChatGPT or Claude are so hyped right now that it is hard to have a sensible discussion about them. Some people say that they can do anything, while others argue they cannot be trusted for anything. So I thought I should share a concrete example where I have found generative AI to be, if not perfect, at least truly helpful.

Many people expect generative AI to be good at coming up with novel ideas. It's not. In fact, it is quite bad at it. But what it is really good at is rehashing existing information in new forms.1 It is so good at this that it even looks like it comes up with new stuff.

You’ve got a friend in AI #

While generative AI may not be great for writing a ground-breaking computer science algorithm, or writing a truly novel fictional story, it is really good at summarizing and combining content it has been trained on or has been given access to. This means that it is an excellent “tutor” or “researcher”. Perhaps not in the scientific sense, but in the everyday information-gathering sense.

It is like having an extremely knowledgeable friend who loves to answer questions. Sure, this friend happens to be a bit confused sometimes, but given how much your friend can actually remember, a bit of confusion can be forgiven.

This friend can explain virtually anything. It is great at making connections and comparisons. It can give you a high-level summary of a complex topic in seconds that would have taken hours of reading to come up with.

Concrete examples #

Over the last few months I’ve spent a lot of time understanding the pretty complex medical data exchange standard FHIR. I cannot overstate how much help I’ve had from ChatGPT to explain various concepts and how they fit together. And not only the syntax that you can read in a spec, but also best practices and how the standard is typically used. Priceless!

To provide some more examples, here is a selection of actual questions I’ve asked ChatGPT over the last month and gotten excellent answers to.

Describe Go’s minimal version selection algorithm in a short sentence.

I want to buy a mirror ball for home use, 20 or 30 cm diameter. What speed do I want for the motor?

When did Microsoft switch from using the word “directory” to “folder”?

It seems some people dislike the term servant leadership, why?

Compared with the latest iPhone processor, how many years do I have to go back for it to computationally match the most powerful processor sold by Intel?

How fast were the plates moving when the Himalayas were formed?

Explain Terraform configuration files with respect to versioning.

These are some examples chosen to show questions in diverse subjects that would otherwise have taken me much longer to find answers to. And I don’t have to filter Google search results sprinkled with ads, sift through “SEO-optimized” articles with low signal-to-noise ratio, or jump between sites with completely different designs. It is all there in a clear and simple interface.

And those are just the initial questions. Often the real value comes from being able to ask follow-up questions and have it explain areas that are still unclear.

For what tasks have you found generative AI to be most helpful?

  1. An artificial (but impressive) example of their ability to combine existing knowledge in a task that would make most humans sweat is something like “Critique Crime and Punishment by Dostoyevsky in the form of a rap battle.” ↩︎