Getting Started with LLM APIs
When Zack first saw his assistant summarize an email, he was amazed. It felt like magic: he typed a sentence, hit “run,” and out popped a clean, professional summary. But Zack wasn’t satisfied with magic. He wanted to know how it actually worked.
“If I’m going to trust this tool with my inbox,” he thought, “I should at least know what happens when I press Enter.”
So today, Zack will go deeper. He’ll discover what an API really is, what models like GPT-4o, Claude, and LLaMA can do, and how to run his very first script in Python.
By the end of this lesson, Zack (and you) will no longer see the AI assistant as a black box. You’ll see the pipes, wires, and steps that make it run.
What is an API?
Zack started with a simple question: “What’s an API?” The term API stands for Application Programming Interface. In everyday words, it’s just a way for two programs to talk to each other.
- Imagine you walk into a restaurant. You don’t cook your own food, you tell the waiter what you want.
- The waiter passes your order to the kitchen.
- The kitchen prepares the meal and hands it back to the waiter.
- The waiter delivers it to your table.
That’s exactly how an API works.
- Zack’s Python script is the customer.
- The API is the waiter.
- The AI model (like GPT-4o) is the kitchen.
- The response text is the meal.
Every time Zack runs his code, his script sends a request, the API delivers it to the model, and the model replies with text. This means the assistant doesn’t “live” on Zack’s laptop. It lives on servers run by OpenAI (for GPT-4o), Anthropic (for Claude), or Groq (for fast inference of open models like LLaMA). His laptop just sends the request and prints the reply.
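Under the hood, the "waiter" is a plain HTTP request carrying JSON. Here is a hedged sketch of that round trip using only Python's standard library, assuming the standard chat completions endpoint; the actual network call only fires if OPENAI_API_KEY is set:

```python
import json
import os
import urllib.request

# The chat endpoint is just a URL; the "waiter" carries JSON both ways.
URL = "https://api.openai.com/v1/chat/completions"

def build_order(prompt: str) -> dict:
    """Build the JSON 'order' the waiter (API) carries to the kitchen (model)."""
    return {
        "model": "gpt-4o",
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_order("Summarize this email: Hi team, please send me the report by Friday.")
print(json.dumps(payload, indent=2))

# Delivering the order is a single HTTP POST (requires OPENAI_API_KEY).
if os.environ.get("OPENAI_API_KEY"):
    req = urllib.request.Request(
        URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        reply = json.loads(resp.read())
        print(reply["choices"][0]["message"]["content"])
```

In practice you'll use the official client library (installed below), which wraps exactly this request for you.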
Three popular LLMs
Zack wanted to know: “Which model should I use? Are they all the same?”
Here’s what he learned:
- GPT-4o (OpenAI):
This is the model Zack already tested. It’s fast, reliable, and versatile. It handles emails, summaries, and everyday tasks well. It also supports multiple languages and can return structured outputs if you guide it carefully.
- Claude 3.5 (Anthropic):
Claude is known for its long context window. If Zack ever needs to process a long contract or dozens of emails at once, Claude might be better because it can handle huge amounts of text.
- LLaMA 3 (Meta, via Groq):
LLaMA is open-source, which means Zack can even run it on his own hardware if he has a strong enough machine. Groq hosts LLaMA on super-fast chips, so you get quick responses.
Zack realized it wasn’t about choosing the “best” model. It was about picking the one that fit his task:
- For quick email summaries: GPT-4o.
- For very long documents: Claude.
- For open-source or offline needs: LLaMA.
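One way to keep that choice flexible in code is a small routing table. This is just an illustrative sketch, not an official pattern: the base URLs are the providers' public API endpoints (Groq's is OpenAI-compatible), and the model names were current at the time of writing and may change.

```python
# Sketch: map each kind of task to a model and an API base URL.
# Model names and URLs below are assumptions that may drift over time.
PROVIDERS = {
    "email_summary": {"model": "gpt-4o",
                      "base_url": "https://api.openai.com/v1"},
    "long_document": {"model": "claude-3-5-sonnet-20240620",
                      "base_url": "https://api.anthropic.com"},
    "open_source":   {"model": "llama3-70b-8192",
                      "base_url": "https://api.groq.com/openai/v1"},
}

def pick_model(task: str) -> str:
    """Return the model id suited to a given kind of task."""
    return PROVIDERS[task]["model"]

print(pick_model("email_summary"))  # gpt-4o
```

If Zack's needs change later, he edits one dictionary instead of hunting through his scripts.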
Setting up the playground
Now it was time for Zack to actually talk to one of these models. He started with OpenAI because it was beginner-friendly.
Here’s the exact journey Zack took. If you’re following along, do this with him:
1. Install Python

Zack opened his terminal and typed:

```shell
python --version
```

It showed Python 3.10.12. Perfect. If you don’t see Python, download it from python.org.
2. Create a project folder

```shell
mkdir email_assistant
cd email_assistant
```
3. Create a virtual environment

```shell
python -m venv venv
```

- On macOS/Linux, activate with:

```shell
source venv/bin/activate
```

- On Windows:

```shell
venv\Scripts\activate
```
When Zack saw (venv) in his terminal, he knew he was inside the environment.
4. Install the OpenAI library

```shell
pip install openai
```

This gave him the OpenAI client he needed for the script.
5. Get an API key

- Zack went to platform.openai.com.
- He signed in with his account.
- On the left sidebar, he clicked API Keys.
- He pressed Create new secret key.
- A popup showed a string starting with sk-.... He copied it carefully.
- On macOS/Linux, he typed:

```shell
export OPENAI_API_KEY="sk-abc123..."
```

- On Windows PowerShell:

```shell
setx OPENAI_API_KEY "sk-abc123..."
```

- He closed and reopened his terminal so the key would be loaded.
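Before writing any API code, it's worth confirming the key actually loaded. The key_status helper below is a hypothetical convenience for this check, not part of the OpenAI library; it relies only on the fact that OpenAI keys start with "sk-":

```python
import os

def key_status(value):
    """Classify an API key value; OpenAI secret keys start with 'sk-'."""
    if not value:
        return "missing"
    if not value.startswith("sk-"):
        return "malformed"
    return "ok"

# Reads the same environment variable the OpenAI client will use.
print(key_status(os.environ.get("OPENAI_API_KEY")))
```

If this prints "missing", reopen your terminal or re-run the export/setx step above.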
Now Zack was ready to code.
Writing your first prompt in Python
Zack created a new file called test_ai.py and wrote this:
```python
from openai import OpenAI

# Initialize the client
client = OpenAI()

# Send a message to GPT-4o
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "user", "content": "Summarize this email: Hi team, please send me the report by Friday."}
    ]
)

# Print the result
print(response.choices[0].message.content)
```
He saved it, then ran:
```shell
python test_ai.py
```
The screen showed:
```
The team is asked to send a report by Friday.
```
Zack smiled. His script had just “talked” to GPT-4o and gotten back a summary.
Plain text vs JSON
At first, Zack was happy with plain sentences. But soon he thought: “What if I want this assistant to send tasks into a to-do app? I’ll need structured data, not just sentences.”
So he changed the prompt:
```python
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": "Extract tasks from this email and return valid JSON with keys owner, action, deadline. Email: Hi team, please send me the report by Friday."
        }
    ]
)

print(response.choices[0].message.content)
```
This time, the output looked like:
```json
{
  "owner": "team",
  "action": "send report",
  "deadline": "Friday"
}
```
Now Zack had structured data he could parse with Python’s json library.
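Parsing that reply is a single json.loads call, with one caveat: models sometimes wrap JSON in Markdown code fences, so it's safer to strip those first. A minimal sketch, using a hard-coded sample reply rather than a live API call:

```python
import json

# A sample model reply; real replies sometimes arrive wrapped in ```json fences.
raw = '{"owner": "team", "action": "send report", "deadline": "Friday"}'

def parse_tasks(text: str) -> dict:
    """Strip optional Markdown fences, then parse the JSON the model returned."""
    cleaned = text.strip()
    if cleaned.startswith("```"):
        cleaned = cleaned.strip("`")        # drop the fence backticks
        if cleaned.startswith("json"):
            cleaned = cleaned[4:]           # drop the language tag
    return json.loads(cleaned)

task = parse_tasks(raw)
print(task["deadline"])  # Friday
```

Once parsed, the dict plugs straight into the rest of a Python program, such as a to-do app integration.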
Plain text is for humans. JSON is for programs. Both are useful, and Zack learned he could switch between them just by tweaking the prompt.
Mini-exercise for you
Zack challenged himself, and you, to try a simple test.
- Copy the script into a file called exercise.py.
- Replace the email text with this paragraph:
"Remote work has grown quickly in recent years. Teams rely on tools like Slack and Zoom to stay connected. While it offers flexibility, it also creates challenges in communication and team culture."
- Run the script.
You should see something like:
“Remote work is rising, supported by tools like Slack and Zoom, but it brings challenges in communication and culture.”
If you want, try asking for JSON instead:
```
"Summarize this paragraph in JSON with keys topic, benefits, challenges."
```
The output might look like:
```json
{
  "topic": "Remote work",
  "benefits": "Flexibility, digital tools",
  "challenges": "Communication, culture"
}
```
This little exercise shows the real power of APIs. You give instructions (the prompt), and you get back what you asked for: plain text, or structured data.
Zack’s takeaway from the script
After running these tests, Zack leaned back and said:
“So the assistant isn’t magic. It’s just my script sending text to a model and getting text back. That I can understand.”
He realized he didn’t need to master machine learning theory. He just needed to master the workflow:
- Write a prompt.
- Send it through the API.
- Use the output in his own app.
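That three-step workflow fits in a few lines. In this sketch the model call is a stand-in function so the loop is visible without network access; in practice you would pass in the real client call:

```python
def run_workflow(prompt: str, call_model) -> str:
    """Write a prompt, send it through 'the API', use the output.
    call_model is any function taking a messages list and returning text."""
    messages = [{"role": "user", "content": prompt}]  # step 1: write a prompt
    reply = call_model(messages)                      # step 2: send it through the API
    return reply.strip()                              # step 3: use the output in your app

# A fake model for illustration; swap in the real API call in practice.
fake_model = lambda messages: "  The report is due Friday.  "
print(run_workflow("Summarize this email...", fake_model))
```

Everything else in this lesson, from the API key to JSON prompts, is a refinement of this one loop.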
That’s the foundation of every AI-powered product he’d ever seen.
Conclusion
In this lesson, you walked beside Zack as he:
- Learned what APIs are (a waiter between your code and the model).
- Got familiar with the top models: GPT-4o, Claude, and LLaMA.
- Installed Python, set up an environment, and got an API key.
- Wrote his first Python script to talk to GPT-4o.
- Learned the difference between plain text and structured JSON.
- Practiced with a mini-exercise summarizing a paragraph.
With this foundation, Zack is ready for the next step: connecting these scripts to real emails and meetings. That’s where his assistant will start saving serious time.
Frequently Asked Questions
What is an LLM API?
An LLM API is a way for your Python code to send text to a large language model like GPT-4o or Claude and receive text or structured data in response.

Which models can I use?
The most common choices are OpenAI’s GPT-4o, Anthropic’s Claude, and Meta’s LLaMA (often hosted on Groq servers for speed).

Do I need machine learning expertise to use an LLM API?
No. If you know basic Python, you can install the library, get an API key, and send prompts. The heavy lifting is done on the model’s servers.

What’s the difference between plain text and JSON output?
Plain text is easy for humans to read, while JSON is structured data you can parse with Python and connect to apps like task managers.

How do I try this myself?
Save the sample Python code, run it in your terminal, and check the output. Start with summarizing a short paragraph, then experiment with JSON formatting.