From 'Baby Face' to Balanced Design: A Workshop for Learning User Testing Through Game Updates


Maya Henderson
2026-04-16
21 min read

Learn user testing and visual iteration by turning the Anran redesign into a hands-on game art portfolio workshop.


If you want to learn user testing, qualitative research, and visual iteration the way real game teams do it, the Anran redesign is a surprisingly useful case study. Blizzard’s Season 2 update is not just a cosmetic tweak; it is a live example of how player feedback can reshape a character’s identity, how a team can recover from a controversial first impression, and how designers can document the process as proof of skill. In this workshop-style guide, you will turn that kind of update cycle into a portfolio workshop you can complete on your own or with classmates, using rapid feedback loops and simple research methods. You will also see how this approach connects to broader creator workflows, from smarter publishing to structured experimentation like a design sprint.

The key idea is simple: instead of treating a redesign as a final answer, treat it as a learning system. You sketch a visual concept, test it with a few players, note what people say and how they react, revise the concept, and repeat until the design feels balanced. That process builds the same muscles employers look for in game art, UX, and production roles: asking good questions, listening without bias, making measurable changes, and explaining your decisions clearly. If you are building a career-ready portfolio, you can pair this workshop with pieces on writing bullet points that sell your work, personalizing your job search, and character insights so your final case study reads like a professional artifact rather than a class assignment.

1) Why the Anran Redesign Is a Strong Learning Case

A public controversy creates a clear design problem

Good workshop subjects need tension, and this one has it. Anran’s original look drew criticism for reading too youthful, which made the redesign question easy to frame: how do you keep the character recognizable while addressing the feedback that the face shape, proportions, or styling felt off? That is exactly the kind of problem designers face in live service games, where every update sits between artistic intent and player expectation. When you study a case like this, you are not just reviewing a character model; you are analyzing how teams manage perception, readability, and brand consistency under public scrutiny.

This makes the example especially useful for students because the problem is bounded. You do not need the full studio pipeline to learn from it, only a repeatable process for observation and revision. That is similar to how creators use content creation lessons from streaming models or how teams learn from deal-radar style content: the lesson is not the product itself, but the structure behind the product. In a classroom, portfolio review, or self-directed study session, a controversial redesign gives you a concrete problem statement to test against.

Live-service games reward iteration, not perfection

Game updates are never just one-and-done releases. They are part of an ongoing conversation with players, which means visual design must adapt after launch when new feedback reveals blind spots. This is why the Anran update is valuable as a learning model: it shows that a team can listen, revise, and communicate the changes without pretending the original version never happened. That mindset is useful in almost any creative field, from data-driven operations to analytics-first team structures, because iteration is usually stronger than speculation.

For aspiring designers, this helps replace a common myth: that strong art means your first draft must be right. In reality, professional game art often improves because someone tested assumptions early and often. An iterative workflow also reduces emotional attachment to any one version, which makes criticism easier to process. When you treat feedback as information rather than judgment, your design decisions become sharper and your portfolio story becomes more credible.

What this teaches employers about your process

Hiring teams usually want to see how you think, not only what you made. A redesign case study that includes user testing, player feedback, and visual iteration proves you can handle ambiguity. It also demonstrates that you know how to translate qualitative comments into design actions, which is a major differentiator in junior and entry-level portfolios. If you want to frame that work well, study how professionals package outcomes in fields like before-and-after examples, or how teams present measurable value in implementation guides.

In short, this workshop is not about copying a hero model. It is about documenting a repeatable method. That method becomes the portfolio asset: the artifact that shows you can take messy feedback, organize it, and improve a design in a way other people can trust.

2) Set Up the Workshop Like a Real Design Sprint

Define a narrow research question

Start with one question, not five. A strong example would be: “How can we revise a character face to feel mature, readable, and consistent with the game’s art style while preserving recognizable identity?” That is specific enough to test and broad enough to invite meaningful feedback. If you want to make the exercise more realistic, write the question the way a studio might: frame the target audience, the design constraint, and the expected outcome. This approach mirrors how teams prepare a program launch playbook or how planners build a multi-game portfolio roadmap.

Once you have the question, define the success criteria. For example, your revised concept should score higher on “looks age-appropriate,” “feels consistent with the world,” and “keeps the character recognizable at thumbnail size.” These criteria give you something more useful than vague praise. They also help you avoid design-by-committee, where every opinion gets treated as equally actionable.
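If you want to compare versions against those criteria with numbers rather than impressions, a minimal sketch like the following can average tester ratings per criterion. The criteria names, scale, and scores here are illustrative, not from a real study:

```python
# Sketch: averaging tester ratings (1 = disagree, 5 = agree) against
# predefined success criteria. All names and numbers are illustrative.

criteria = ["age-appropriate", "world-consistent", "recognizable at thumbnail size"]

# Each tester rates every criterion once.
ratings = {
    "age-appropriate": [4, 5, 3, 4, 4],
    "world-consistent": [5, 4, 4, 5, 3],
    "recognizable at thumbnail size": [3, 3, 4, 2, 3],
}

for name in criteria:
    scores = ratings[name]
    avg = sum(scores) / len(scores)
    print(f"{name}: {avg:.1f} / 5")
```

Running the same scoring sheet on V1 and V2 gives you a simple before-and-after comparison that is harder to argue with than vague praise.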

Build the minimum viable prototype

You do not need a final render to start. A grayscale face study, a rough color pass, or a side-by-side sheet of three facial proportion options is enough for user testing. Keep the early prototype intentionally incomplete so testers comment on the design logic, not polish. In other words, you want response quality, not fake certainty. That is similar to how creators validate ideas using a lightweight market research playbook before investing heavily in production.

For a game art workshop, a simple prototype packet might include: one original image, two revised facial options, one mood board, and a short character description. Add one sentence explaining the intended emotional read, because testers need a reference point. This is also a good place to borrow the discipline of content upgrade decisions: you are not chasing newness for its own sake, but choosing the smallest change that improves the result.

Plan the workshop timeline

A practical sprint can fit into two to four hours, or be stretched across two class sessions. The workflow is simple: first, prepare the prototype and research questions; second, recruit 5 to 8 testers; third, collect comments; fourth, identify patterns; fifth, revise the design; and sixth, present your iteration story. This is enough to create a strong micro-portfolio piece without overcomplicating the process. If you want a template for structuring repetitive personal projects, look at 4-week block templates and adapt the logic to your creative schedule.

Pro Tip: In user testing, five thoughtful testers often reveal more useful patterns than twenty casual opinions. You are looking for repeated themes, not popularity contests.

3) Recruiting Testers and Asking Better Questions

Choose the right participants

Your testers should match the question you are asking. If you are evaluating whether the redesign reads as older, more balanced, and less controversial, then recruit people who play games, follow character art, or can at least interpret stylized visuals. You do not need a perfect demographic sample for a workshop, but you do need enough variety to catch obvious blind spots. Think of it like authenticity testing: the more careful your method, the more reliable your read.

Try to get a mix of heavy players, casual players, and one or two people who do not know the game well. Heavy players will help with identity and lore consistency, while casual viewers are often better at identifying whether the silhouette or facial expression instantly communicates the intended age and mood. This balance matters because game art must work for both fans and first-time viewers.

Use neutral prompts, not leading questions

The fastest way to ruin qualitative research is to tell people what you think before they answer. Avoid questions like “Don’t you think the new face looks more mature?” Instead ask, “What three words would you use to describe this character?” or “What do you notice first?” and “How old does this character feel to you, and why?” These prompts reveal perception without steering it. That same principle applies in sponsorship readiness and advocacy communication: the frame you use influences the quality of the response.

Write your questions in advance and keep them short. The goal is to hear the tester’s language, not yours. When testers start using phrases like “more grounded,” “less childlike,” “more confident,” or “better proportioned,” you are collecting the raw material for revision. Those words become useful design notes later.

Capture reactions as both notes and behavior

Do not only write down what people say. Pay attention to pauses, facial expressions, and whether they keep comparing versions without being prompted. A tester who hesitates before answering may be noticing a subtle mismatch they cannot fully articulate. This is where qualitative research becomes especially valuable: meaning often appears in the gap between words and reactions. If you want a broader example of translating raw material into a useful system, study step-by-step preprocessing workflows.

As you record feedback, separate observation from interpretation. For example, write “Testers said the cheeks felt younger than expected” rather than “The cheeks are wrong.” That discipline keeps your notes objective and easier to analyze. It also makes your final case study more trustworthy because readers can see exactly where the design insight came from.
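One way to enforce that observation/interpretation split is to record every note with two separate fields, so the raw quote and your reading of it can never blur together. The field names and example notes below are hypothetical:

```python
# Sketch: feedback notes that keep raw observations separate from
# the researcher's interpretation. Entries are illustrative.

notes = [
    {"tester": "P1",
     "observation": "Said the cheeks felt younger than expected",
     "interpretation": "Cheek fullness may be driving the age read"},
    {"tester": "P2",
     "observation": "Compared V1 and V2 three times before answering",
     "interpretation": "The difference between versions may be too subtle"},
]

# A reviewer can audit the observations without accepting the interpretations.
for note in notes:
    print(f'{note["tester"]} observed: {note["observation"]}')
```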

4) Running the User Test and Reading the Feedback

Structure the session like a mini lab

Open by explaining the task in one sentence, then show the prototype without commentary. Ask the tester to describe what they see before you provide context. This first impression is often the most important data point because it captures the design’s immediate read. Once they have spoken freely, follow up with focused prompts about age perception, style consistency, emotional tone, and memorability. That sequencing creates a clean research flow similar to how people assess a repairable product before digging into upgrade details.

Keep the session short. Ten to fifteen minutes is usually enough for a workshop test because attention drops quickly and repetition creates noise. If you are testing multiple design versions, rotate the order between participants so the first image does not always benefit from novelty. Small controls like that make your process feel much more professional.
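The rotation described above can be sketched in a few lines: shift the presentation order by one position per participant so every version takes a turn going first. The version and participant labels are placeholders:

```python
# Sketch: counterbalancing presentation order so no single version
# always benefits from first-impression novelty.

versions = ["V1", "V2", "V3"]
testers = ["P1", "P2", "P3", "P4", "P5", "P6"]

for i, tester in enumerate(testers):
    shift = i % len(versions)
    order = versions[shift:] + versions[:shift]  # rotate by one per tester
    print(tester, "sees:", " -> ".join(order))
```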

Look for patterns, not one-off opinions

After the session, sort comments into themes. You might notice repeated observations about facial age cues, eye spacing, jawline softness, eyebrow shape, color palette, or emotional expression. If three out of six testers describe the face as “too young,” that is a pattern worth acting on. If only one person hates the hair design but no one else mentions it, note it but do not let it dominate the revision. This is the same principle that guides operational signal analysis: not every data point deserves the same weight.

A useful trick is to count signal types. How many comments are about age read, identity consistency, visual hierarchy, and style harmony? This helps you avoid emotional overreaction to the loudest feedback. In a portfolio, showing that you weighed evidence rather than chasing every suggestion makes you look mature and design-literate.
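Once comments are coded by theme, the counting itself is trivial to automate. This sketch uses made-up comments, theme labels, and an arbitrary "act on it" threshold of three mentions:

```python
from collections import Counter

# Sketch: tallying coded feedback themes so repeated patterns
# outweigh loud one-off opinions. Data is illustrative.

coded_comments = [
    ("P1", "age read"), ("P1", "style harmony"),
    ("P2", "age read"), ("P3", "age read"),
    ("P4", "visual hierarchy"), ("P5", "identity consistency"),
    ("P6", "age read"),
]

theme_counts = Counter(theme for _, theme in coded_comments)

# Themes mentioned by several testers become revision priorities;
# single mentions are noted but not acted on.
for theme, count in theme_counts.most_common():
    flag = "act" if count >= 3 else "note"
    print(f"{theme}: {count} ({flag})")
```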

Translate comments into design actions

Feedback only becomes valuable when it changes something. Turn each major theme into an action item, such as “Increase jaw definition by 10 percent,” “Reduce eye width,” or “Add stronger contrast between face and hair to improve readability.” These are not just artistic preferences; they are concrete experiments. If you want to see how structured action turns into clearer writing, compare that process with before-and-after bullet points and note how specificity changes impact.

Be prepared to explain why you did not implement some suggestions. That is an important skill. Sometimes a tester says “make her look older,” but the actual issue is not age; it is softness in the facial geometry. The best designers do not blindly obey feedback. They interpret it.

5) Iterating the Visual Concept Without Losing the Character

Preserve identity while adjusting perception

The hardest part of redesign is not changing the character; it is changing the right thing. If you overcorrect, you may solve one complaint while breaking recognition or personality. For Anran, the challenge is likely to rebalance proportions, expression, and styling cues so the character still feels like herself. This is why visual iteration should follow a principle: keep the strongest identity markers, refine the weak signals, and re-test both together. That logic is similar to how a creator refines format while keeping the core audience promise, as discussed in streaming-model content lessons.

Make a short list of “non-negotiables” before you edit. These might include hairstyle, armor silhouette, color family, or personality expression. Then decide which components are flexible, such as face proportions, contrast, eye shape, and accessory scale. That separation prevents the redesign from drifting away from the character’s original identity.

Use versioning to document every change

Label your concepts clearly: V1, V2, V3, and so on. Each version should include a short note about what changed and why. For example: “V2 reduces eye size and increases cheekbone structure based on testers describing the face as youthful.” This kind of documentation is gold in a portfolio because it shows reasoning, not just results. It also resembles good technical processes in fields like document preprocessing and team workflow design.

A useful portfolio rhythm is to show three versions side by side, each with an annotation about the feedback that inspired the revision. Readers should be able to follow the story in under two minutes. If your work requires a long verbal explanation, the visuals are not doing enough of the communication.

Check the design in context, not isolation

A character face can look very different when placed in a full hero lineup, a loading screen, or a small profile icon. That is why you should test the redesigned concept in at least one contextual mockup. A face that works in close-up might fail in a team roster if it loses contrast or becomes too detailed at small size. This is the same reason creators compare decisions across formats, whether they are choosing content devices or deciding how to package a campaign across channels.

Context testing also helps you think like a production artist. In games, a good design must survive animation, UI framing, and marketing use. Your workshop becomes stronger when you show that you considered all three.

6) Turning the Workshop into a Micro-Portfolio Piece

Tell the story in problem-solution-evidence format

Employers want a case study that reads like a decision log. Start with the problem, explain the method, show the evidence, and end with the final iteration. A clean structure might look like this: “The original design was perceived as too youthful. I ran five quick user tests, identified age-read and facial proportion as recurring concerns, and revised the concept to improve maturity and visual balance.” That format is concise, credible, and easy to scan. It follows the same logic as strong professional summaries in business implementation guides.

Include one image of the original, one of the test sheet, and one of the final revision. Add short captions that explain what each image proves. Captions are essential because they help non-designers understand the link between feedback and change. Without captions, the case study may look polished but not informative.

Show your research artifacts, not just the final art

Portfolios become much stronger when they include the messy middle. That can mean sticky-note themes, interview notes, a small spreadsheet of feedback, or a coded theme list. These artifacts demonstrate that your design decisions were grounded in evidence. They also make you memorable because many applicants show only final renders and skip process. If you want to improve how you present process work, study document-to-decision workflows and results-focused writing.

Do not overproduce the artifacts. A clean one-page research summary is enough if it is honest and readable. Your job is not to simulate a giant studio; it is to show that you can think like one.

Write a reflection that sounds like a junior professional, not a student apologizing for their work

Reflection should sound grounded and specific. Instead of saying “I learned a lot,” say “I learned that small changes in facial proportion had a bigger impact on perceived age than I expected, so I shifted from general style changes to targeted structural edits.” That sentence tells an employer you can extract a design principle from experience. It also shows growth, which is what the best portfolio pieces communicate. For help shaping that career narrative, see personalized job search guidance and think about how each project fits a broader career direction.

7) A Practical Comparison: Common Redesign Moves and What They Solve

The table below compares typical visual adjustments you might make during a character redesign workshop. Use it to translate feedback into design decisions rather than guessing at fixes.

| Design Change | What It Usually Improves | Risk If Overdone | Best Used When Feedback Says... | Workshop Note |
| --- | --- | --- | --- | --- |
| Reduce eye size | Makes the face feel older or more grounded | Can remove expressiveness | "Looks too young" or "too cute" | Test alongside eyebrow shape so the expression does not flatten |
| Sharpen jawline | Strengthens maturity and silhouette | Can make the face feel harsh | "Too soft" or "baby-faced" | Use subtle changes first; small edits often outperform dramatic ones |
| Increase contrast around the face | Improves readability at small sizes | May create visual noise | "Hard to notice in the roster" | Check in both close-up and UI thumbnail contexts |
| Adjust hair framing | Shapes identity and directs attention | Can overpower facial cues | "The face doesn't stand out" | Useful when the hairstyle competes with facial proportions |
| Refine mouth and cheek proportions | Changes age perception without changing style | Can break likeness if pushed too far | "Looks younger than intended" | Often the most efficient fix in a short design sprint |

This comparison is useful because it turns subjective criticism into a practical revision map. If your workshop includes several testers, you can even track which visual fixes reduce the most repeated concerns. That makes your final outcome easier to defend and easier to present in a portfolio review.

8) Common Mistakes to Avoid in User Testing and Visual Iteration

Testing too many variables at once

If you change the face, hairstyle, outfit, and color palette all in one pass, you will not know what caused the improvement. This is one of the most common beginner mistakes. The simplest way to avoid it is to isolate changes by version. Think of it like troubleshooting a system: if everything changes, nothing is measurable. That principle also appears in infrastructure decision guides and other technical workflows where clarity depends on controlled variables.

Confusing agreement with validation

Just because people say they like a design does not mean the design is solving the stated problem. A tester might like the original face but still agree it reads too young. Another might dislike the redesign simply because it is less familiar. Your job is to separate comfort from fit. This is why qualitative research is so powerful: it helps explain the why behind reactions, not just the reactions themselves.

Skipping documentation because the change seems obvious

What feels obvious today will not feel obvious when you revisit the project six months later. Write down why you changed something, what feedback triggered the change, and what you still want to test next. Documentation is not extra work; it is the thing that turns a decent exercise into a portfolio asset. Without it, the story disappears and only the image remains.

9) Extend the Workshop Into a Career-Building Habit

Repeat the process across different character types

Once you complete one redesign workshop, repeat it with another character archetype: a villain, a support hero, a mascot, or a non-human character. The repetition helps you compare how feedback varies across styles. It also gives you more material for a portfolio series, which is stronger than a one-off. Over time, you will build a library of process work that shows range, not just a single polished illustration. That approach is similar to building a smart publishing system, like the one explored in volatility calendar planning.

Use the project to practice creative communication

Designers are often evaluated on how clearly they can explain choices to artists, producers, and non-specialists. A workshop like this gives you practice turning messy ideas into clear language. You will learn to say things like “I adjusted the jawline because testers repeatedly described the face as youthful” instead of “I made it look better.” That clarity is useful in interviews, critique sessions, and team meetings. It also aligns with the communication discipline behind sponsorship readiness and player-friendly brand messaging.

Build a review-ready narrative

When you finish, summarize the project in three parts: what problem existed, how you tested it, and what changed after iteration. If you can explain those three parts in under 90 seconds, you have a strong interview story. Keep the language simple and evidence-based. A reviewer should come away thinking, “This person knows how to observe, test, revise, and document.” That is the real career value of the workshop.

Pro Tip: A strong portfolio piece is not the prettiest image. It is the clearest demonstration that you can solve a design problem using evidence, judgment, and iteration.

10) Final Takeaway: Learning User Testing by Designing Like a Team

What the Anran case teaches beyond one character

The Anran redesign is useful because it shows how public feedback can become design direction. It reminds aspiring game designers that visual work is rarely static and that live-service games demand ongoing listening. More importantly, it demonstrates that player feedback can be transformed into a structured process instead of a vague reaction pile. That is the heart of user testing: collect what people notice, identify the pattern, test a change, and see whether the next version better solves the problem.

When you frame the exercise as a workshop, you also gain something else: a portfolio piece that proves you can think and communicate like a professional. Your final deliverable is not only a better character concept, but a documented mini-study of how good design emerges through iteration. For students and aspiring game artists, that combination is powerful because it links craft, research, and career development in one project.

Where to go next

If you want to keep building this skill, pair the workshop with resources on research, presentation, and production mindset. Explore how teams structure evidence in analytics-first templates, how creators refine output through content strategy lessons, and how planners use portfolio prioritization to decide what to improve first. The more you practice this cycle, the more naturally you will move from opinion to observation, and from observation to design action.

FAQ

How many testers do I need for a workshop like this?

For a quick portfolio workshop, five to eight testers is usually enough to surface repeated themes. You are not trying to produce statistically perfect research; you are trying to identify clear patterns that can guide an iteration. If you have more time, you can add another round after revising the concept.

What if testers disagree with each other?

That is normal and often useful. Look for the reason behind the disagreement: one person may focus on lore, another on silhouette, and another on emotional tone. Use recurring themes to guide your changes, and note the disagreement in your case study so readers see that you handled mixed feedback thoughtfully.

Do I need advanced art skills to do this project?

No. The workshop is about process, not perfect rendering. Even rough sketches, grayscale studies, or simple digital mockups can be effective if your testing and documentation are strong. In many portfolios, a clear thought process is more persuasive than a polished but unexplained final image.

How do I make this look like a real portfolio piece?

Show the problem, your method, the evidence, and the outcome. Include version history, short notes on what changed, and one or two research artifacts. Keep the layout clean and make sure a reviewer can understand the project in under two minutes.

What should I do if the redesign still feels wrong after testing?

Return to the problem statement and identify whether the issue is actually the face, the proportions, the costume, or the context presentation. Sometimes the first hypothesis is wrong. Another round of testing with a narrower change often reveals the real issue and gives you a stronger final answer.


Related Topics

#game design #skills #education

Maya Henderson

Senior SEO Content Strategist

Senior editor and content strategist. Writing about technology, design, and the future of digital media. Follow along for deep dives into the industry's moving parts.
