UX Complexity on Small Projects: Make the Cost Visible

Small projects rarely fail because the team does not care about design.

They fail because complexity grows quietly.

One extra setting. One extra state. One extra button. One extra "temporary" variant. One extra edge case handled differently from the rest of the product.

None of these decisions looks dangerous alone.

Together, they turn a simple product into something nobody can fully explain anymore.

That is why design complexity needs to be visible even on small projects. Not with a heavyweight process. Not with a full design system. Not with three weeks of research before every button moves.

Just enough structure to see when the interface is getting more expensive to understand, maintain, and use.

What to track

For a small project, I would track three things.

1. Interface inventory

Keep a simple inventory of the main screens, components, and concepts in the product.

Not a formal design system. Not a beautiful documentation site. Just a living list.

For each area, write down:

  • what screens exist;
  • what components are reused;
  • what concepts the user has to understand;
  • what terminology appears in the UI;
  • what variants already exist.

The goal is not documentation for its own sake.

The goal is to avoid discovering, three months later, that the same thing is called "workspace," "project," and "environment" depending on which screen you opened.

That kind of inconsistency feels small while it is being introduced. It becomes expensive once users start learning the wrong mental model.
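The inventory does not need tooling, but it can be plain data, which makes the terminology problem checkable. A minimal sketch in TypeScript; the concept names, labels, and screens are all hypothetical:

```typescript
// A minimal inventory entry: one concept, plus every name the UI uses for it.
// All names here are hypothetical placeholders.
type ConceptEntry = {
  concept: string;   // the single idea the user has to understand
  uiNames: string[]; // every label the UI currently shows for it
  screens: string[]; // where it appears
};

const inventory: ConceptEntry[] = [
  {
    concept: "project",
    uiNames: ["Project", "Workspace", "Environment"],
    screens: ["Home", "Settings", "Billing"],
  },
  { concept: "member", uiNames: ["Member"], screens: ["Team"] },
];

// Flag concepts that go by more than one name — the "workspace vs. project" problem.
function divergentTerms(entries: ConceptEntry[]): ConceptEntry[] {
  return entries.filter(
    (e) => new Set(e.uiNames.map((n) => n.toLowerCase())).size > 1
  );
}

console.log(divergentTerms(inventory).map((e) => e.concept)); // ["project"]
```

Even run by hand once a month, a check like this surfaces drift long before users start learning the wrong vocabulary.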

2. Flow list

List the main user flows.

For example:

  • create an account;
  • create a project;
  • invite a user;
  • upload a file;
  • configure an integration;
  • recover from a failed payment;
  • delete something important.

Then mark which flows are primary, which are secondary, and which are edge cases.

This matters because teams often design the happy path and accidentally invent the edge cases in production.

The flow list keeps the product honest.

If a feature adds a new path, it should be visible. If a feature creates a second way to do something that already exists, that should be visible too.

Not forbidden.

Visible.

That difference matters.
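The flow list and the second-path check can live in the same small file. A sketch in TypeScript, with illustrative flow names and tiers, not a prescription:

```typescript
// The flow list as data, so new paths show up in review instead of in production.
type Tier = "primary" | "secondary" | "edge";

type Flow = { name: string; tier: Tier; goal: string };

const flows: Flow[] = [
  { name: "Create account", tier: "primary", goal: "signup" },
  { name: "Create project", tier: "primary", goal: "new-project" },
  { name: "Quick-create project", tier: "secondary", goal: "new-project" }, // second path, same goal
  { name: "Recover failed payment", tier: "edge", goal: "fix-payment" },
];

// Goals reachable by more than one flow — not forbidden, just visible.
function duplicatedGoals(all: Flow[]): string[] {
  const byGoal = new Map<string, number>();
  for (const f of all) byGoal.set(f.goal, (byGoal.get(f.goal) ?? 0) + 1);
  return [...byGoal.entries()].filter(([, n]) => n > 1).map(([g]) => g);
}

console.log(duplicatedGoals(flows)); // ["new-project"]
```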

3. States audit

Every meaningful screen has states.

At minimum:

  • empty;
  • loading;
  • success;
  • error;
  • permission denied;
  • partially configured;
  • archived or disabled;
  • no results.

Small teams often design only the "full and working" version of a screen.

That is the easiest state to design and the least interesting one.

The painful UX usually lives elsewhere:

  • the integration failed;
  • the list is empty;
  • the user has no permission;
  • the search returned nothing;
  • the API is slow;
  • the object was deleted by someone else;
  • the feature exists, but the user has not configured it yet.

A states audit is not glamorous. It is also one of the cheapest ways to prevent the product from feeling broken.
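If the product is typed, the audit can be enforced rather than remembered. A sketch using a TypeScript discriminated union; the state names mirror the checklist above, and the copy is placeholder text:

```typescript
// The audited states as one union type. Adding a state to the union
// forces every renderer to handle it, or the build fails.
type ScreenState =
  | { kind: "empty" }
  | { kind: "loading" }
  | { kind: "success"; items: string[] }
  | { kind: "error"; message: string }
  | { kind: "permission-denied" }
  | { kind: "no-results"; query: string };

function render(state: ScreenState): string {
  switch (state.kind) {
    case "empty":
      return "Nothing here yet. Create your first item.";
    case "loading":
      return "Loading…";
    case "success":
      return state.items.join(", ");
    case "error":
      return `Something went wrong: ${state.message}`;
    case "permission-denied":
      return "You don't have access to this area.";
    case "no-results":
      return `No results for "${state.query}".`;
    default: {
      // If a new state is added to the union but not handled above,
      // this assignment stops compiling.
      const unhandled: never = state;
      return unhandled;
    }
  }
}
```

The exhaustiveness check is the audit in executable form: forgetting a state becomes a compile error instead of a production surprise.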

Once you can see complexity, decide what to cut

Seeing the complexity is only half the job. The next question is where to spend it, and what to cut.

A few patterns consistently save more than they cost.

Progressive disclosure

Hide the long tail of options behind a "more" surface.

The 80% of users who do not need those options never see them. The 20% who do need them can still find them in one consistent place.

The important part is consistency.

A "more" menu is useful when it behaves like a deliberate secondary surface. It becomes junk when every team uses it as a place to hide decisions they did not want to make.
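One way to keep the "more" surface deliberate is to make overflow an explicit property of each action rather than an afterthought. A minimal sketch; the action names and the cutoff of three are hypothetical:

```typescript
// Actions opt in to being secondary; nothing lands in "more" by accident.
type Action = { label: string; secondary?: boolean };

function splitActions(
  actions: Action[],
  maxVisible = 3
): { visible: Action[]; overflow: Action[] } {
  const primary = actions.filter((a) => !a.secondary);
  const secondary = actions.filter((a) => a.secondary);
  // Primary actions beyond the cutoff also overflow, so the toolbar stays stable.
  return {
    visible: primary.slice(0, maxVisible),
    overflow: [...primary.slice(maxVisible), ...secondary],
  };
}

const { visible, overflow } = splitActions([
  { label: "Edit" },
  { label: "Share" },
  { label: "Delete" },
  { label: "Export", secondary: true },
]);
console.log(visible.map((a) => a.label));  // ["Edit", "Share", "Delete"]
console.log(overflow.map((a) => a.label)); // ["Export"]
```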

Sensible defaults

Every good default is a decision the user does not have to make.

Spend disproportionate time on defaults.

They earn it.

A bad default forces every user to understand the product before they can use it. A good default lets the user move forward and adjust later when they actually know what they care about.
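In code, this usually reduces to one rule: users pass only what they actually decided, and everything else falls back. A sketch with an illustrative settings shape and values:

```typescript
// Every setting has a default the user never has to think about.
// The shape and the chosen values are illustrative, not recommendations.
type Settings = {
  visibility: "private" | "team" | "public";
  notifications: boolean;
  autosave: boolean;
};

const DEFAULTS: Settings = {
  visibility: "team",
  notifications: true,
  autosave: true,
};

// Callers specify only their deliberate choices; the rest falls back.
function withDefaults(overrides: Partial<Settings> = {}): Settings {
  return { ...DEFAULTS, ...overrides };
}

console.log(withDefaults({ visibility: "private" }));
// { visibility: "private", notifications: true, autosave: true }
```

Keeping the defaults in one named object also makes them reviewable: changing a default is a visible diff, not a scattered edit.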

One primary way to do each thing

Prefer one primary way to do each thing.

If the app has three equally visible ways to mark something as done, two of them are probably confusing the first-time user.

Pick the primary path. Make it obvious.

Secondary paths can still exist as shortcuts, contextual actions, or power-user affordances. But they should not compete with the main path for attention.

If you remove or demote an option, the people who liked it will tell you.

People who were confused by the extra options will usually just stop using the app.

That asymmetry is annoying, but real.

Do not design for the power user first

Power-user features are a tax on every other user unless they are isolated well.

Advanced filters, bulk actions, shortcuts, command palettes, custom fields, and admin controls can all be useful. But they should not dominate the basic flow before the basic flow works.

Add them once the core path is solid.

Then put them behind a surface that does not intrude on the simple path.

The test is straightforward:

Can a new user complete the basic task without understanding the advanced feature exists?

If yes, fine.

If no, the advanced feature is not advanced. It is now part of onboarding.

That is a much higher bar.

Treat empty states as onboarding

Empty states are not decoration.

They are onboarding at the exact moment the user needs it.

A good empty state explains:

  • what this area is for;
  • why it is empty;
  • what the user can do next;
  • what will happen after they do it.

A bad empty state is a blank screen with a vague button.

The first time a user opens a feature, the empty state is often the product's best chance to explain itself. Do not waste that moment.
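The four questions above can be made mandatory by shape: an empty state cannot ship without answering all of them. A sketch with placeholder copy for a hypothetical integrations screen:

```typescript
// An empty state that can't be built without answering the four questions.
// All copy below is placeholder text.
type EmptyState = {
  whatThisIsFor: string;
  whyItIsEmpty: string;
  nextAction: {
    label: string; // what the user can do next
    hint: string;  // what will happen after they do it
  };
};

const integrationsEmpty: EmptyState = {
  whatThisIsFor: "Integrations connect this project to your external tools.",
  whyItIsEmpty: "You haven't connected anything yet.",
  nextAction: {
    label: "Connect an integration",
    hint: "You'll authorize the tool, then data starts syncing automatically.",
  },
};
```

A required field is a cheap way to stop the blank-screen-with-a-vague-button version from ever being the path of least resistance.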

Things I would not bother with on a small project

Some activities sound related to design quality but are usually not worth the effort at small scale.

A formal design system

A glossary file and a small component inventory are enough until you have multiple product surfaces or multiple teams shipping UI independently.

A real design system is a product.

It needs ownership, maintenance, contribution rules, documentation, versioning, and adoption work.

If nobody has time for that, do not pretend you have a design system. Keep a practical inventory instead.

Heatmaps and session replay tools

These can be useful at scale.

At small scale, they are often noisy and expensive to interpret.

A heatmap can tell you where someone clicked. It cannot reliably tell you what they thought they were doing, why they hesitated, or what they expected to happen.

For that, talk to users.

Revolutionary technology. Works surprisingly well.

A/B testing every decision

With low traffic, the math usually does not work.

You will wait too long, learn too little, or fool yourself with noise.

Make the design choice. Ship it. Watch support. Talk to users. Look at activation and retention if you have enough volume to make those numbers meaningful.

Do not cosplay as a growth team if the product has twelve active users and one of them is your co-founder.

Quantitative UX research by default

Three good interviews can teach you more than a fifty-response survey when the team is small.

Find three users. Talk to them. Watch where they struggle. Ask what they expected.

Small teams often have direct access to users. That is an advantage. Use it before building process around not using it.

The point of heavier research and analytics at larger scale is to add evidence where direct contact and judgment stop being enough.

On a small team, judgment is often still close to the user.

Do not gold-plate the process to look like a bigger company.

The weekly complexity check

A small project does not need a UX committee.

It does need a recurring moment where somebody asks what complexity was added.

Before shipping a meaningful feature, ask:

  • Did we add a new concept?
  • Did we add a new state?
  • Did we add another way to do something that already existed?
  • Did we introduce new terminology?
  • Did we create empty, loading, error, and permission states?
  • Did we add a component variant that now needs to be maintained?
  • Did we make the primary path clearer or noisier?

This can happen in a design review, sprint planning, pull request, or retro.

The location matters less than the habit.
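The checklist can also be a tiny per-feature record, so "what complexity did we add?" has a concrete answer at review time. A sketch; the fields mirror the questions above, and the example values are hypothetical:

```typescript
// One record per shipped feature: what it added to the product's complexity.
type ComplexityDelta = {
  newConcepts: string[];
  newStates: string[];
  duplicatePaths: string[]; // second ways to do an existing thing
  newTerminology: string[];
  newVariants: string[];    // component variants that now need maintenance
};

// A rough total — not a gate, just a number to say out loud in review.
function complexityScore(d: ComplexityDelta): number {
  return (
    d.newConcepts.length +
    d.newStates.length +
    d.duplicatePaths.length +
    d.newTerminology.length +
    d.newVariants.length
  );
}

const tagsFeature: ComplexityDelta = {
  newConcepts: ["tag"],
  newStates: [],
  duplicatePaths: [],
  newTerminology: ["tag"],
  newVariants: [],
};
console.log(complexityScore(tagsFeature)); // 2
```

A score of zero means the feature rode on existing concepts, which is the cheap kind of feature.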

What this is really about

For a small project, complexity analysis on the design side is not about catching bad design at the gate.

It is about keeping the trend visible in places the team already looks:

  • the Figma file;
  • the next sprint plan;
  • the pull request;
  • the retro conversation;
  • the support inbox.

The inventory keeps the interface honest with itself.

The flow list keeps unloved edge cases from being invented in production.

The states audit keeps empty, error, loading, and permission paths from becoming someone else's surprise.

The discipline part is harder.

There is no tool that prevents a stakeholder from asking for "just one more thing."

But making the cost visible is the job.

That one more thing might mean:

  • a new state on three screens;
  • a new variant of two components;
  • another branch in onboarding;
  • a new permission rule;
  • a new empty state;
  • a small rethink of the side panel.

Once that cost is visible, the conversation becomes real.

Not:

Can we add this?

But:

Is this worth the extra complexity?

That is a better conversation.

Complexity always grows.

The work is staying in front of it.