The primary user interface for Yarkly is a chatbot. There are many reasons for that choice, and I will cover them in a future post, but today I'd like to talk about how I built it, what went wrong, and what I learned from it.

How I approached the user interface

If you think about it, a chatbot is, in essence, a vast state machine that has to be able to serve any customer query at any given time. Given that, building an excellent chatbot interface is not an easy or straightforward task.
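To make the state-machine framing concrete, here is a minimal sketch of a conversation modeled as states and transitions. The state names and inputs are hypothetical, invented for illustration; they are not Yarkly's actual flow.

```python
# A conversation as a state machine: the current state plus the user's
# input determine the next state. Unrecognized input leaves the state
# unchanged. All names here are made up for illustration.
TRANSITIONS = {
    ("start", "hi"): "ask_city",
    ("ask_city", "city_given"): "ask_budget",
    ("ask_budget", "budget_given"): "show_results",
}

def next_state(state: str, user_input: str) -> str:
    """Return the next state, or stay put if the input is not understood."""
    return TRANSITIONS.get((state, user_input), state)

print(next_state("start", "hi"))            # -> ask_city
print(next_state("ask_city", "gibberish"))  # -> ask_city (input not understood)
```

Even this toy version hints at the core difficulty: every state must somehow handle every possible input, which is exactly where the complexity piles up.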

Falling in love with the initial solution

I started approaching the chatbot problem by figuring out what kind of state machine I would need to understand precisely what the customer is looking for. The initial model contained dozens of steps with complex transitions. It was a fascinating thing to build, so, without a doubt, I fell in love with it.


Everything seemed clear except one little thing that bugged me the most: how to understand what exactly people are trying to tell me. The engineer inside me yelled that it is impossible to write a script that could understand arbitrary free-form text from a chat, which, by the way, is precisely what people write to the bot.


I was too naïve, so I listened to the inner developer and tried to figure out how I could lead people into the flow I wanted. Luckily, many chat platforms offer an option to suggest quick-reply buttons to customers. It seemed like a great solution: I only have to handle button clicks, no free-form text. It sounded like the best possible approach to handling communication.
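The button idea can be sketched as follows. The payload shape here is generic and hypothetical; each platform (Telegram, Messenger, and so on) has its own concrete format for quick replies, so treat this as the idea, not any real API.

```python
# Sketch of replying with a fixed set of quick-reply buttons instead of
# accepting free-form text. Field names are illustrative, not a real
# platform API.
def build_reply(text, buttons):
    """Build a message payload that offers the user a fixed set of choices."""
    return {
        "text": text,
        "quick_replies": [{"title": b, "payload": b.upper()} for b in buttons],
    }

msg = build_reply("What are you looking for?", ["Rent", "Buy"])
```

The appeal is obvious: a button click sends back a known payload, so the bot only ever has to match a finite set of inputs.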

Building it inside out

I sat down and, over a few days, managed to write a small framework that handles customer input, restores the last state, applies the input, performs the state transition, persists the result, and so on.
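The handling loop just described can be sketched in a few lines. The storage, names, and transition table below are stand-ins I invented for illustration; a real framework would use a database and far richer state.

```python
# Rough sketch of the handling loop: restore the last state, apply the
# input, perform the transition, persist the result.
STORE = {}  # chat_id -> state; a dict standing in for real persistence

def handle_update(chat_id, user_input, transitions, initial="start"):
    state = STORE.get(chat_id, initial)                      # restore last state
    new_state = transitions.get((state, user_input), state)  # apply input
    STORE[chat_id] = new_state                               # persist transition
    return new_state

transitions = {("start", "RENT"): "ask_city", ("ask_city", "PARIS"): "done"}
handle_update(42, "RENT", transitions)   # start -> ask_city
handle_update(42, "PARIS", transitions)  # ask_city -> done
```

Note that the loop only understands exact inputs from the transition table; that assumption is precisely what broke later.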

Asking users for feedback

Early signs of an issue

Early on, I showed the bot to two people. One person with a more tech-savvy background had no problem with the chatbot and managed to set everything up very smoothly. The other one, however, had a really tough time: she was stuck on every single step, highly frustrated about how to move forward. At that point, I considered the system to be well done and simply disregarded the feedback I didn't like. That was a big mistake.

Hard findings

When I finished the full flow implementation, I started thinking about more thorough validation. By that point, I had probably spent around twenty hours of work on the chatbot flow.

There is quite a lot to consider when testing an app on real customers. In my case, it was not easy to do because I could not shadow people using my app, yet I had to know what obstacles and issues they were running into.

Luckily, there is a pretty good solution, and no one rejected it: I asked users to record their screens while talking to my chatbot. In the end, customers sent the videos to me; I edited them and uploaded them to Basecamp so I could use them for future reference.

Remember my idea about buttons for sending commands to the bot? People did not use them at all, even though the buttons covered a massive part of the dialog screen. Instead, users tried to write something free-form to the bot. Based on my observations, around 80% of respondents tried to communicate with the machine the same way they would with a real person.

Given that, I faced a hard truth: my idea was wrong, and I should have tested it earlier. I had spent a significant amount of time implementing the user interface my way, and now I had to pay a high price for it. First, I would need to develop some way to integrate free-form text recognition into the interface. The second issue was the general framework I had designed for the bot: it was not flexible enough to accommodate this kind of feedback. I estimated the overall changes at a few days of work, not counting the research needed to figure out what people might write to the bot.
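One cheap way to bolt free-form understanding onto a button-driven flow is to map keywords to the same payloads the buttons would have sent, so the rest of the state machine stays untouched. The keywords and intents below are made up for illustration; a real solution might use a proper NLU service instead.

```python
# Naive keyword-based intent guessing: translate free-form text into the
# same payloads the buttons would have produced. Keywords are invented
# for illustration.
KEYWORDS = {
    "rent": "RENT",
    "buy": "BUY",
    "price": "BUDGET",
}

def guess_intent(text):
    """Return the first matching intent, or None if nothing matched."""
    for word in text.lower().split():
        if word in KEYWORDS:
            return KEYWORDS[word]
    return None

guess_intent("I want to rent a flat")  # -> "RENT"
guess_intent("hello there")            # -> None
```

A keyword table is obviously fragile, but it illustrates why the research step matters: the table is only as good as your knowledge of what people actually type.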

How I should have done it instead

Despite the fact that the experience was fun, it was not the most efficient one. Looking back, I see a lot of things I could have done better to stay leaner and iterate faster.

Emulate product usage with a few customers

The simplest solution is to talk to people. It is better to build the flow based on real customers' expectations. They come with certain things in mind, and stepping away from those expectations creates significant friction, which inevitably leads to a poor user experience.

The vital step in this phase is talking to customers and validating their habits around the product. In the end, it should be clear what the main communication flow should look like and what the possible deviations from it are. However, it is crucial to keep in mind that what users say does not equal what they do, so I do not consider a verbal explanation alone sufficient for flow validation.

I can foresee a few possible scenarios for how it could be done:
1. The simplest one is to pretend to be the bot and talk with customers over chat, so they behave closer to real life.
2. Another approach is slightly harder, but it allows testing even more complex assumptions. There are a lot of UX design tools, such as Figma, that help create and test task flows. Screen recording becomes very handy here because it allows tracking user frustration: watching the video, it becomes apparent whenever customers get stuck and have no idea what to do.

Extract and build the main flow

There are two possible ways to implement ideas. The first is based on internal feelings and guesses. The second is quite the opposite: it relies mostly on the insights the maker gets from users and user testing. I chose the first one when I built the initial version of the chatbot.

The flow in my app should have been built based on the data I got from customers. There is no right or easy way to impose behavior on users, especially if it is unnatural to them. The best possible approach I see is simply structuring and encoding the insights gathered in the previous step.

Re-validate on users and adjust

Validating and learning are never-ending stories. User behavior changes all the time; people adopt new habits and preferences, so it is essential to keep testing new hypotheses and ideas as often as possible. Some common sense is required: usually there is no need to check every single detail, but at a certain scale, every penny adds up to millions. It is essential to stay pragmatic and use the right tools at the right time. In my opinion, if user testing is acceptable cost-wise, it should be done.


I stepped away from a well-known path and paid a modest price for my mistake. Now I have to adapt to the way customers tend to use my product rather than force them to do it my way. I could have saved a significant amount of time if I had tried it on users before coding and polishing it.

It is essential to listen to any feedback customers give. Sometimes something that sounds silly can be a good hint that leads to a much better product in the future.

Staying pragmatic with UI/UX testing should be a top priority. User testing can often save a significant amount of time and prevent needless frustration when customers behave unexpectedly. In my experience, I am more often wrong than right in predicting how exactly users will behave. Hence, it is a rare situation when my initial guess goes live without changes and adjustments driven by the insights I get from user testing.

In the next post

That is all for today. Subscribe to my newsletter, and you will get the next post from my startup diary. See you next time, when we will talk about choosing a technology stack for the project.