Prompt Engineering

What I learned building Resumly.pro

2025-07-10

If you missed my last post on the Resumly.pro launch, check it out here.

AI generated image of a person making a resume

Image generated by Gemini

The Blank Page and the AI Co-Pilot

Crafting the perfect resume is a universal struggle, especially when you're not sure what experience to include or how to present it. Enter Resumly.pro, my senior capstone project, which uses Google's Gemini to tackle this challenge head-on. I built an AI resume builder that scans a job application, extracts key requirements, and generates a tailored resume in seconds.

Resumly.pro is a full-stack web application built with React (with Next.js), TypeScript, and Tailwind CSS on the frontend, and a Python Flask backend that integrates the Gemini API and handles web scraping.

Simply connecting an AI model isn't enough. The real challenge was figuring out how to ask the AI for what I wanted. This is the art and science of prompt engineering. In this post, I'll show you the evolution of my approach, from its rough beginnings to a refined set of instructions that gets powerful, professional results.

AI generated image of a prompt engineering diagram.

Image generated by Gemini

What is Prompt Engineering?

Before we dive in, I want to clarify what I mean by "prompt engineering."

During my Computer Science degree, with an emphasis in Machine Learning, I built what I like to call a GPT-1: a very basic GPT-like model trained on a few books from a single author. It used an RNN (Recurrent Neural Network) architecture, and it was a blast to learn about and build.

That being said, it was nothing compared to the power of modern AI like the Gemini 2.5 Pro/Flash models. Building that GPT-1 taught me that LLMs are not magic, just a lot of math and linear algebra. A popular programming YouTuber described them as a "word calculator": you give it a prompt (input) and it gives you a likely response (output) based on the patterns it learned from its training data.

The problem is that the patterns it learned are not always what you want, or the result is simply average. Because LLMs are very complex statistical models, they give you the most likely response based on those patterns. If we look at a normal distribution, the most likely response is the mean. But we don't always want the mean; we want the best response. This is where prompt engineering comes in.

My ML Professor, Adam Hayes, always said:

Garbage in, garbage out.

Prompt engineering is the process of crafting the input to an LLM in a way that guides it to produce the desired output. It's about understanding how the model works and how to leverage its strengths while mitigating its weaknesses. This involves experimenting with different phrasings, structures, and contexts to find the most effective way to communicate your intent.

Defining the Goal

The first step is defining the goal: what you want the LLM to achieve. I have found it best to state the role and mission at the very start of the prompt. Here is an example from one of Resumly.pro's prompts:

Role: You are an elite Resume Strategist and Career Storytelling Expert with deep understanding of applicant tracking systems (ATS) and hiring psychology.

Mission: Craft a compelling, strategically optimized resume that positions the candidate as the ideal fit for their target role while authentically representing their professional journey.

Applicant Tracking Systems (ATS) are software applications that scan resumes for keywords and formatting to determine if a candidate is a good fit for a job. They are used by many companies to filter out unqualified candidates before the recruiter or HR staff even see the resume.

After defining the role and mission, I define what a good resume bullet point looks like. I originally planned to provide examples, but because of the vast number of roles and industries, I found it better to define the characteristics of a good bullet point. This prevents the model from fixating on a single example and leaves room for some creativity. Here is an example:

Transform descriptions using the IMPACT Method:

- Initiate with powerful action verbs (Led, Architected, Optimized, Pioneered)
- Measure outcomes with quantifiable results when possible
- Position achievements within business context
- Align language with target role requirements
- Create compelling narratives that demonstrate growth
- Tailor messaging to resonate with hiring managers

I found that this, along with the rest of the prompt, gave the LLM enough context to understand what I wanted. It also let me easily change the characteristics of a good bullet point without rewriting the entire prompt.

AI generated image of a robot painting

Image generated by Gemini

Allowing the LLM to be Creative

One important aspect of this project was that I did not want the LLM to fabricate information. Instead, I wanted it to draw on the patterns from its training to present the user's real information in creative and relevant ways. This meant providing enough context and guidance for it to understand the boundaries of the task while still allowing for creative expression.

To achieve this, I gave the LLM a set of guidelines and constraints, but also encouraged it to think outside the box. For example, I instructed it to "Elevate language sophistication while maintaining authenticity", "Strategically emphasize accomplishments that mirror job requirements", and "Synthesize related responsibilities into powerful, cohesive statements", among many others.

This was instrumental in guiding the LLM's creative process, keeping it anchored to the task at hand and preventing it from going off the rails and making up false information about the user. Additionally, avoiding negative language like "do not fabricate" or "do not lie" helped the LLM focus on the task without feeling constrained by negative instructions.

From Asking to Requiring JSON

One of the most important aspects of prompt engineering is formatting your input to get the desired output. Another is making the output usable in a structured way. In the case of Resumly.pro, I needed the data returned in a structured JSON format so that it could be stored in a database and used to populate my pre-made resume templates.

At first, I achieved this with a simple instruction telling the model to return the data as JSON. For example:

Return the data in a JSON format with the following structure:

{
  "bullet_points": [
    {
      "text": "Bullet point text",
      "context": "Context of the bullet point"
    }
  ]
}

This worked most of the time. But there was roughly a 1 in 50 chance that the model would not return the data in the correct format: it would forget to escape a double quote, or drop a closing brace or comma. This was frustrating, but I was able to work around it with a try/except block to catch the error and retry the request.

However, this was not a reliable solution, and I needed a way to ensure the model would always return the data in the correct format. This is when I learned about structured output.

Structured Output

Structured output is supported by both Gemini and ChatGPT, and I am sure by other LLMs as well. This is from the Gemini docs:

You can configure Gemini for structured output instead of unstructured text, allowing precise extraction and standardization of information for further processing.

This was a game changer because it let me define a schema that the model's output must follow. The best part is that the model is constrained to that schema far more strictly than with a prose instruction. It always returns the data in the correct format, and I no longer have to worry about an unescaped double quote or a missing brace or comma.

It was easy: I converted some of my TypeScript types into Pydantic models (Python classes) and fed them to the model as a schema. Here is an example from the Gemini docs:

from google import genai
from pydantic import BaseModel

# Defining the schema for the structured output
class Recipe(BaseModel):
    recipe_name: str
    ingredients: list[str]

client = genai.Client()
response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="List a few popular cookie recipes, and include the amounts of ingredients.",
    config={
        "response_mime_type": "application/json",
        # Pass the schema to the model
        "response_schema": list[Recipe],
    },
)

Conclusion

Building Resumly.pro wasn't just about integrating a powerful AI model; it was a deep dive into the fascinating, often finicky, world of prompt engineering. From initially defining the AI's role and mission to carefully sculpting its creative boundaries and, finally, to the game-changing shift to structured JSON output, each iteration brought Resumly.pro closer to its goal of crafting perfectly tailored resumes. It truly underscored the idea that even with incredible word calculators, the quality of the output is directly proportional to the thoughtfulness of the input.

Building Resumly.pro taught me that prompt engineering is less about finding a magic formula and more about a continuous process of refinement, experimentation, and understanding the nuances of how these powerful models interpret our intent. As AI continues to evolve, so too will the art and science of guiding it. This blend of human design and AI capability is what truly unlocks the potential for applications like Resumly.pro, transforming what was once a "blank page" struggle into an effortless, personalized experience.

Questions?

Thanks for reading! I'd love to hear about your own prompt engineering challenges and successes, and about any questions or experiences you've had with LLMs. Contact me here.