Reliability and Resiliency for Cloud Connected Applications

Building cloud connected applications that are reliable is hard. At the heart of building such a system is a solid architecture and focus on resiliency. We're going to explore what that means in this post.

When I first started development on FastBar, a cashless payment system for events, there were a few key criteria that drove my decisions for the overall architecture of the system.

Fundamentally, FastBar is a payment system designed specifically for event environments. Instead of using cash or drink tickets or clunky old credit card machines, events use FastBar instead.

There are 2 key characteristics of the environment in which FastBar operates that drive almost all underlying aspects of the technical architecture: internet connectivity sucks, and we're dealing with people's money.

Internet connectivity at an event sucks

Prior to starting FastBar, I had a side business throwing events in Seattle. We'd throw summer parties, Halloween parties, New Year's Eve parties etc… In 10 years of throwing events, I cannot recall a single event where internet worked flawlessly. Most of the time it ranged from "entirely useless" to "ok some of the time".

At an event, there are typically 2 choices for getting internet connectivity:

  1. Rely on the venue's in-house WiFi

  2. Use the cellular network, for example via a hotspot

Sometimes the venue's WiFi would work great in an initial walkthrough… and then 2,000 people would arrive and connectivity would go to hell. Other times it would work great in certain areas of the venue, but not where we wanted to set up registration or place a bar, and we'd get an all too familiar response from the venue's IT folks: "oh, we didn't think anyone would want internet there".

Relying on hotspots was just as bad: at many indoor locations, connectivity is poor. Even if you're outdoors with great connectivity, add a couple of thousand people to that space, each of them with a smartphone hungry for bandwidth so they can post to Facebook/Instagram/Snapchat, or so their phone can decide now is a great time to download that latest 3GB iOS update in the background.

No matter what, internet connectivity at event environments is fundamentally poor and unreliable. That isn't true in a standard retail environment, like a coffee shop or hairdresser, where you'd deploy a traditional point of sale and could count on generally reliable internet connectivity.

We're dealing with people's money

For the event, the point of sale system is one of the most critical aspects - it affects attendees' ability to buy stuff, and the event's ability to sell stuff. If the point of sale is down, attendees are pissed off and the event is losing money. Nobody wants that.

Food, beverage and merchandise sales are a huge source of revenue for events. For some events, it could be their only source of revenue.

In general, money is a very sensitive topic for people. Attendees have an expectation that they are charged accurately for the things they purchase, and events expect the sales numbers they see on their dashboard are correct, both of which are very reasonable expectations.

Reliability and Resiliency

Like any complicated, distributed software system, FastBar has many non-functional requirements that are important to create something that works. A system needs to be:

  • Available

  • Secure

  • Maintainable

  • Performant

  • Scalable

  • And of course reliable

Ultimately, our customers (the events), and their customers (the attendees), want a system that is reliable and "just works". We achieve that kind of reliability by focusing on resiliency - we expect things will fail, and design a system that will handle those failures.

This means when thinking about our client side mobile apps, we expect the following:

  • Requests we make over the internet to our servers will fail, or will be slow. This could mean we have no internet connectivity at the time and can't even attempt to make a request to the server, or we have internet, but the request failed to get to the server, or the request made it to our server but the client didn't get the response

  • A device may run out of battery in the middle of an operation

  • A user may exit the app in the middle of an operation

  • A user may force close the app in the middle of an operation

  • The local SQLite database could get corrupt

  • Our server environment may be inaccessible

  • 3rd party services our apps communicate with might be inaccessible
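Designing for these failures shows up even at the lowest level of our networking code: every call is wrapped in retry logic with backoff. Here's a minimal sketch of the idea (illustrative only, not our actual code - a real version would also distinguish transient failures from permanent ones and add jitter):

```csharp
using System;
using System.Threading.Tasks;

public static class Retry
{
    // Retries an async operation, waiting exponentially longer
    // between attempts: 2, 4, 8, 16... seconds.
    public static async Task<T> WithBackoffAsync<T>(
        Func<Task<T>> operation, int maxAttempts = 5)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                return await operation();
            }
            catch (Exception) when (attempt < maxAttempts)
            {
                await Task.Delay(TimeSpan.FromSeconds(Math.Pow(2, attempt)));
            }
        }
    }
}
```

Retrying is only safe because every mutating request also carries an idempotency key (more on that below), so a request that succeeded on the server but failed on the wire can be re-sent without creating duplicates.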

On the server side, we run on Azure and also depend on a handful of 3rd party services. While generally reliable, we can expect:

  • Problems connecting to our Web App or API

  • Unexpected CPU spikes on our Azure Web Apps that impact client connectivity and dramatically increase response time for requests

  • Web Apps having problems connecting to our underlying SQL Azure database or storage accounts

  • Requests to our storage account resources being throttled

  • 3rd party services that normally respond in a couple of hundred milliseconds taking 120+ seconds to respond (that one caused a whole bunch of issues that still traumatize me to this day)

We've encountered every single one of these scenarios. Sometimes it seems like almost everything that can fail, has failed at some point, usually at the most inopportune time. That's not quite true - I can still mentally construct some nightmare scenarios that we could potentially encounter in the future - but these days we're in great shape to withstand multiple critical failures across different parts of our system and still retain the ability to take orders at the event, with minimal impact to attendees and event staff.

We've done this by focusing on resiliency in all parts of the system - everything from the way we architect to the details of how we make network requests and interact with 3rd party services.

Processing an Order

To illustrate how we achieve resiliency, and therefore reliability, let's take a look at an example of processing an order. Conceptually, it looks like this:

FastBar - Order Processing - Conceptual.png

The order gets created on the POS and makes an API request to send it to the server. Seems pretty easy, right?

Not quite.

Below is a highly summarized version of what actually happens when an order is placed, and how it flows through the system:

There is a lot more to it than just a simple request-response. Instead, it's a complicated series of asynchronous operations with a whole bunch of queues in between, which together help us provide a system that is reliable and resilient.

On the POS App

  1. The underlying Order object and associated set of OrderItems are created and persisted to our local SQLite database
  2. We create a work item and place it on a queue. In our case, we implement our own queue as a table inside the same SQLite database. Both steps 1 and 2 happen transactionally, so either all inserts succeed, or none succeed. All of this happens within milliseconds, as it's all local on the device and doesn't rely on any network connectivity. The user experience is never impacted in case of network connectivity issues
  3. We call the synchronization engine and ask it to push our changes
    1. If we're online at the time, the synchronization engine will pick up items from the queue that are available for processing. That could be just the 1 order we just created, or there could be many orders queued and waiting to be sent to the server, for example if we were offline and have just come back online. Each item is processed in the order it was placed on the queue, and each item involves its own set of work. In this case, we're attempting to push this order to our server via our server-side API. If the request to the server succeeds, we'll delete the work item from the queue, and update the local Order and OrderItems with some data that the server returns to us in the response. This all happens transactionally.
    2. If there is a failure at any point, for example a network error, or a server error, we'll put that item back on the queue for future processing
    3. If we're not online, the synchronization engine can't do anything, so it returns immediately, and will re-try in the future. This happens either via a timer that is syncing periodically, or after another order is created and a push is requested
    4. Whenever we make any request to the server that could update or create any data, we send an IdempotentOperationKey which the server uses to determine if the request has been processed already or not
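To make steps 1 and 2 concrete, here's roughly what the transactional enqueue looks like using sqlite-net, the SQLite library commonly used with Xamarin. The Order, OrderItem, WorkItem and WorkItemType types are hypothetical stand-ins for our actual schema:

```csharp
using System;
using System.Collections.Generic;
using SQLite; // sqlite-net

public class OrderStore
{
    readonly SQLiteConnection db;
    public OrderStore(SQLiteConnection db) { this.db = db; }

    public void CreateOrder(Order order, List<OrderItem> items)
    {
        // RunInTransaction commits all inserts or none of them, so an
        // order can never exist without its corresponding queue entry.
        db.RunInTransaction(() =>
        {
            db.Insert(order);
            db.InsertAll(items);

            // The work queue is just another table in the same database.
            db.Insert(new WorkItem
            {
                Type = WorkItemType.PushOrder,
                OrderId = order.Id,
                IdempotentOperationKey = Guid.NewGuid().ToString(),
                CreatedAtUtc = DateTime.UtcNow
            });
        });
    }
}
```

Because the order and its work item commit atomically, a battery dying or a force-quit a millisecond later can't leave behind an order that will never be synced.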

The Server API

  1. Our Web API receives the request and processes it
    1. We make sure the user has permissions to perform this operation, and verify that we have not already processed a request with the same IdempotentOperationKey the client has supplied
    2. The incoming request is validated, and if we can, we'll create an Order and set of OrderItems and insert them into the database. At this point, our goal is to do the minimal work possible and leave the bulk of the processing to later
    3. We'll queue a work item for processing in the background
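Put together, the ingestion endpoint looks something like this. This is an illustrative ASP.NET Web API sketch - the repository and queue types are hypothetical:

```csharp
using System.Threading.Tasks;
using System.Web.Http;

[Authorize]
public class OrdersController : ApiController
{
    readonly IOrderRepository _orders;   // hypothetical
    readonly IWorkQueue _workQueue;      // hypothetical

    public OrdersController(IOrderRepository orders, IWorkQueue workQueue)
    {
        _orders = orders;
        _workQueue = workQueue;
    }

    [HttpPost]
    public async Task<IHttpActionResult> Post(CreateOrderRequest request)
    {
        // If we've already processed this key, return the original
        // result rather than creating a duplicate order.
        var existing = await _orders.FindByIdempotentKeyAsync(
            request.IdempotentOperationKey);
        if (existing != null)
            return Ok(existing.ToResponse());

        // Minimal work now: validate and persist...
        var order = await _orders.CreateAsync(request);

        // ...and defer the heavy lifting to the Order Processor.
        await _workQueue.EnqueueAsync(
            new ProcessOrderWorkItem { OrderId = order.Id });

        return Ok(order.ToResponse());
    }
}
```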

Order Processor WebJob

  1. Our Order Processor is implemented as an Azure WebJob and runs in the background, constantly looking at the queue for new work
  2. The Order Processor is responsible for the core logic when it comes to processing an order, for example, connecting the order to an attendee and their tab, applying any discounts or promotions that may be applicable for that attendee and re-calculating the attendee's tab
  3. Next, we want to notify the attendee of their purchase, typically by sending them an SMS. We queue up a work item to be handled by the Outbound SMS Processor
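With the WebJobs SDK, a processor like this is essentially a method with a QueueTrigger attribute - the SDK takes care of polling, dequeuing and retries. A simplified sketch (queue names and payload types are illustrative):

```csharp
using Microsoft.Azure.WebJobs;

public class OrderProcessorFunctions
{
    // Invoked by the WebJobs SDK for each message on the queue.
    public static void ProcessOrder(
        [QueueTrigger("order-processing")] ProcessOrderWorkItem workItem,
        [Queue("outbound-sms")] ICollector<SmsWorkItem> smsQueue)
    {
        // Core logic: connect the order to the attendee and their tab,
        // apply discounts/promotions, recalculate the tab (elided).

        // Hand the attendee notification off to the next component.
        smsQueue.Add(new SmsWorkItem { OrderId = workItem.OrderId });
    }
}
```

A nice property of queue-triggered processing: if the method throws, the message becomes visible on the queue again and is retried, and after repeated failures it's moved to a poison queue for inspection - exactly the failure handling we want.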

Outbound SMS Processor WebJob

  1. The Outbound SMS processor handles the composition and sending of SMS messages to our 3rd party service for delivery, in our case, Twilio (there's a sketch of the send after this list)
  2. We're done!
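The send itself is a thin wrapper around Twilio's C# helper library, roughly like this (credentials and phone numbers are placeholders - real code would pull them from configuration):

```csharp
using Twilio;
using Twilio.Rest.Api.V2010.Account;
using Twilio.Types;

public static class SmsSender
{
    public static void Send(string toNumber, string body)
    {
        TwilioClient.Init("ACCOUNT_SID", "AUTH_TOKEN");

        MessageResource.Create(
            to: new PhoneNumber(toNumber),
            from: new PhoneNumber("+14255550100"), // our Twilio number
            body: body);
    }
}
```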

That's a lot of complexity for what seems like a simple thing. So why would we add all of these different components and queues? Basically, it’s necessary to have a reliable and resilient system that can handle a whole lot of failure and still keep going:

  • If our client has any kind of network issues connecting to the server

  • If our client app is killed in any way, for example, if the device runs out of battery, or if the OS decides to kill our app because we were moved to the background, or if the user force quits our app

  • If our server environment is totally unavailable

  • If our server environment is available but slow to respond, for example, due to cloud weirdness (a whole other topic), or our own inefficient database queries or any number of other reasons

  • If our server environment has transitory errors caused by problems connecting with dependent services, for example, Azure SQL or Azure storage queues returning connectivity errors

  • If our server environment has consistent errors, for example, if we pushed a new build to the server that had a bug in it

  • If 3rd party services we depend on are unavailable for any reason

  • If 3rd party services we depend on are running slow for any reason

Asynchronicity to the Max

You'll notice the above flow is highly asynchronous. Wherever we can queue something up and process it later, we will. This means we're never worried if whatever system we're talking to is operating normally or not. If it's alive and kicking, great, we'll get that work processed quickly. If not, no worries, it will process in the background at some point in the future. Under normal circumstances, you could expect an order to be created on the client and a text message received by the customer within seconds. But, it could take a lot longer if any part of the system is running slowly or down, and that's ok, since it doesn't dramatically affect the user experience, and the reliability of the system is rock solid.

It's also worth noting that all of these operations are both logically asynchronous, and asynchronous at the code level wherever possible.

Logically asynchronous meaning instead of the POS order creation UI directly calling the server, or, on the server side, the request thread directly calling a 3rd party service to send an SMS, these operations get stored in a queue for later processing in the background. Being logically asynchronous is what gives us reliability and resiliency.

Asynchronous at the code level is different. This means that wherever possible, when we are doing any kind of I/O we utilize C#'s async programming features. It's important to note that the underlying code being asynchronous doesn't actually have anything to do with resiliency. Rather, it helps our components achieve higher throughput since they're not tying up resources like threads, network sockets, database connections, file handles etc… waiting for responses. Asynchrony at the code level is all about throughput and scalability.
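To make the distinction concrete, here's the same call written both ways (a contrived sketch):

```csharp
using System.Net.Http;
using System.Threading.Tasks;

public class TabClient
{
    static readonly HttpClient http = new HttpClient();

    // Blocking: the thread sits idle for the entire round trip.
    // Under load, this is how you run out of thread-pool threads.
    public string GetTabBlocking(string url) =>
        http.GetStringAsync(url).Result;

    // Async: the thread returns to the pool while the request is in
    // flight and resumes when the response arrives. Same reliability
    // either way - this buys throughput, not resiliency.
    public Task<string> GetTabAsync(string url) =>
        http.GetStringAsync(url);
}
```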

Conclusion

When you're building mobile applications connected to the cloud, reliability is key. The way to achieve reliability is by focusing on resiliency. Expect that everything can and probably will fail, and design your system to handle those failures. Make sure your system is highly logically asynchronous and queue up work to be handled by background components wherever possible.

FastBar's Technical Architecture

Previously, I've discussed the start of FastBar, how the client and server technology stacks evolved and what it looks like today (read more in part 1, part 2 and part 3 of that series).

As a recap, here's what the high-level components of FastBar look like:

FastBar Components - High Level.png

Let's dive deeper.

Architecture

So, what does FastBar’s architecture look like under the hood? Glad you asked:

Client apps:

  • Registration: used to register attendees at an event, essentially connecting their credit card to a wristband, along with all of the associated features required at a live event

  • Point of Sale: used to sell stuff at events

These are both mobile apps built in Xamarin and running on iOS devices.

The server is more complicated from an architectural standpoint, and is divided into the following primary components:

  • getfastbar.com - the primary customer facing website. Built on Squarespace, this provides primarily marketing content for anyone wanting to learn about FastBar

  • app.getfastbar.com - our main web app which provides 4 key functions:

    • User section - as a user, there are a few functions you can perform on FastBar, such as creating an account, adding a credit card, updating your user profile information and, if you've got the permissions, creating events. This section is pretty basic

    • Attendee section - as an attendee, you can do things like pre-register for an event, view your tab, change your credit card, email yourself a receipt and adjust your tip. This is the section of the site that receives the most traffic

    • Event Control Center section - this is by far the largest section of the web app, it's where events can be fully managed: configuring details, connecting payment accounts, configuring taxes, setting up pre-registration, managing products and menus, viewing reports and downloading data and a whole lot more. This is where event organizers and FastBar staff spend the majority of their time

    • Admin section - various admin related features used by FastBar support staff. The bulk of the management related to a specific event they would do from the Event Control Center, acting on behalf of an event organizer

  • api.getfastbar.com - our API, primarily used by our own internal apps. We also open up various endpoints to some partners. We don’t make this broadly accessible publicly yet because it doesn't need to be. However, it’s something we may decide to open up more broadly in the future

The main web app and API share the same underlying core business logic, and are backed by a variety of other components, including:

  • WebJobs:

    • Bulk Message Processor - whenever we're sending a bulk message, like an email or SMS that is intended to go to many attendees, the Bulk Message Processor will be responsible for enumerating and queuing up the work. For example, if we were to send out a bulk SMS to 10,000 attendees of the event, whatever initiates this process (the web app or the API) will queue up a work item for the Bulk Message Processor that essentially says "I want to send a message to a whole bunch of people". The Bulk Message Processor will pick up the message and start enumerating 10,000 individual work items that it will queue up for processing by the Outbound SMS Processor, a downstream component. The Outbound SMS Processor will in turn pick up each work item and send out individual SMSs

    • Order Processor - whenever we ingest orders from the POS client via the API, we do the minimal amount of work possible so that we can respond quickly to the client. Essentially, we're doing some initial validation and persisting the order in the database, then queuing a work item so that the Order Processor can take care of the heavy lifting later, and requests from the client are not unnecessarily delayed. This component is very active during an event

    • Outbound Email Processor - responsible for sending an individual email, for example as the result of another component that queued up some work for it. We use Mailgun to send emails

    • Outbound Notification Processor - responsible for sending outbound push notifications. Under the covers this uses Azure Notification Hubs

    • Outbound SMS Processor - responsible for sending individual SMS messages, for example a text message update to an attendee after they place an order. We send SMSs via Twilio

    • Sample Data Processor - when we need to create a sample event for demo or testing purposes, we send this work to the Sample Data Processor. This is essentially a job that an admin user may initiate from the web app, and since it could take a while, the web app will queue up a work item, then the Sample Data Processor picks it up and goes to work creating a whole bunch of test data in the background

    • Tab Authorization Processor - whenever we need to authorize someone's credit card that is connected to their tab, the Tab Authorization Processor takes care of it. For example, if attendees are pre-registering themselves for an event weeks beforehand, we vault their credit card details securely, and only authorize their card via the Tab Authorization Processor 24 hours before the event starts

    • Tab Payment Processor - when it comes time to execute payments against a tab, the Tab Payment Processor is responsible for doing the work

    • Tab Payment Sweeper - before we can process a tab's payment, that work needs to be queued. For example, after an event, all tabs get marked for processing. The Tab Payment Sweeper runs periodically, looking for any tabs that are marked for processing, and queues up work for the Tab Payment Processor. It's similar in concept to the Bulk Message Processor in that it's responsible for queuing up work items for another component

    • Tab Authorization Sweeper - just like the Tab Payment Sweeper, the Tab Authorization Sweeper looks for tabs that need to be authorized and queues up work for the Tab Authorization Processor

  • Functions

    • Client Logs Dispatcher - our client devices are responsible for pushing up their own zipped-up, JSON formatted log files to Azure Blob Storage. The Client Logs Dispatcher then takes the logs and dispatches them to our logging system, which is Log Analytics, part of Azure Monitor (there's a sketch of this function after this list)

    • Server Logs Dispatcher - similar in concept to the Client Logs Dispatcher, the Server Logs Dispatcher is responsible for taking server-side logs, which initially get placed into Azure Table Storage, and pushing them to Log Analytics so we have both client and server logs in the same place. This allows us to do end to end queries and analysis

    • Data Exporter - whenever a user requests an export of data, we handle this via the Data Exporter. For large events, an export could take some time. We don’t want to tie up request threads or hammer the database, so we built the Data Exporter to take care of this in the background

    • Tab Recalculator - we maintain a tab for each attendee at an event, it's essentially the summary of all of their purchases they've made at the event. From time to time, changes happen that require us to recalculate some or all tabs for an event. For example, let's say the event organizer realized that beer was supposed to be $6, but was accidentally set for $7 and wanted to fix this going forward, and for all previous orders. This means we need to recalculate all tabs that have orders involving the affected products. For a large event there could be many thousands of tabs affected by this change, and since each tab has unique characteristics, including the rules around how totals should be calculated, this has to be done individually for each tab. The Tab Recalculator takes care of this work in the background

    • Tags Deduplicator - FastBar is a complicated distributed system that supports offline operation of client devices like the Registration and POS apps. On the server side, we also process things in parallel in the background. Long story short, these two characteristics mean that sometimes data can get out of sync. The Tags Deduplicator helps put things back in sync so we eventually arrive at a consistent state
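As an example of the Functions style, the Client Logs Dispatcher mentioned above is conceptually just a blob-triggered function. A simplified sketch - the container name and processing details are illustrative:

```csharp
using System.IO;
using Microsoft.Azure.WebJobs;

public static class ClientLogsDispatcher
{
    // Fires whenever a device uploads a zipped log file to blob storage.
    [FunctionName("ClientLogsDispatcher")]
    public static void Run(
        [BlobTrigger("client-logs/{name}")] Stream logBlob, string name)
    {
        // Unzip the blob, parse the JSON log entries, and push them
        // to Log Analytics (elided).
    }
}
```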

Azure Functions vs WebJobs

So, how come some things are implemented as Functions and some as WebJobs? Quite simply, the WebJobs were built before Azure Functions existed and/or before Azure Functions really became a thing.

Nowadays, it seems as though Azure Functions are the preferred technology to use, so we made the decision a while ago to create any new background components using Functions, and, if any significant refactoring is required to a WebJob, we'll take the opportunity to move it over to a Function as well.

Over time, we plan on phasing out WebJobs in favor of Functions.

Communication Between the Web App / API and Background Components

This is almost exclusively done via Azure Storage Queues. The only exception is the Client Logs Dispatcher, which can also be triggered by a file showing up in Blob Storage.

Azure has a number of queuing solutions that could be used here. Storage queues is a simple solution that does what we need, so we use it.
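Enqueuing a work item from the web app or API looks roughly like this with the storage SDK we use (queue name and work item shape are illustrative):

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;
using Newtonsoft.Json;

public class WorkQueue
{
    readonly CloudQueue queue;

    public WorkQueue(string connectionString)
    {
        var account = CloudStorageAccount.Parse(connectionString);
        queue = account.CreateCloudQueueClient()
                       .GetQueueReference("order-processing");
    }

    public async Task EnqueueOrderAsync(Guid orderId)
    {
        await queue.CreateIfNotExistsAsync();

        // The message body is just the serialized work item; the
        // background component deserializes it on the other end.
        await queue.AddMessageAsync(new CloudQueueMessage(
            JsonConvert.SerializeObject(new { OrderId = orderId })));
    }
}
```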

Communication with 3rd Party Services

Wherever we can, we’ll push interaction with 3rd party services to background components. This way, if 3rd party services are running slowly or down completely for a period of time, we minimize impact on our system.

Blob and Table Storage

We utilize Blob storage in a couple of different ways:

  • Client apps upload their logs directly to blob storage for processing by the Client Logs Dispatcher

  • Client apps have a feature that allows the user to create a bug report and attach local state. The bug is logged directly into our work item tracking system, Pivotal Tracker. We also package up the client side state and upload it to blob storage. This allows developers to re-create the client's state on their own device, or on the simulator, for debugging purposes

Table storage is used for the initial step in our server-side logging. We log to Table storage, and then push that data up to Log Analytics via the Server Logs Dispatcher.

Azure SQL

Even though there are a lot of different technologies to store data these days, we use Azure SQL for a few key reasons: it’s familiar, it works, it’s a good choice for a financial system like FastBar where data is relational and we require ACID semantics.

Conclusion

That’s a brief overview of FastBar’s technical architecture. In future posts, I’ll go over more of the why behind the architectural choices and the key benefits that it has.

Choosing a Tech Stack for Your Startup - Part 3: Cloud Stacks, Evolving the System and Lessons Learnt

This is the final part in a 3 part series around choosing a tech stack for your startup:

  • In part 1, we explored the choices we made and the evolution of FastBar’s client apps

  • In part 2, we started the exploration of the server side, including our technology choices and philosophy when it came to building out our MVP

  • In part 3, this post, we’ll wrap-up our discussion of the server side choices and summarize our key lessons learnt.

As a recap, here are the areas of the system we’re focused on:

FastBar Components - Server Hilighted.png

And in part 2, we left off discussing self-hosting vs utilizing the cloud. TL;DR - forget about self hosting and leverage the cloud.

Next step - let’s pick a cloud provider…

AWS vs Azure

In 2014, AWS was the undisputed leader in the cloud, but Azure was quickly catching up in capabilities and feature set.

Through the accelerator we got into, 9 Mile Labs, we had access to some free credits with both AWS and Azure.

I decided to go with Azure, in part because they offered more free credits via their BizSpark Plus program than what AWS was offering us, in part because I was more familiar with their technology than that of AWS, in part because I'm generally a fan of Microsoft technology, and in part because I wanted to take advantage of their Platform as a Service (PaaS) offerings. Specifically, Azure App Service Web Apps and Azure SQL - AWS didn't have any direct equivalents for those at the time. I could certainly spin up VMs and install my own versions of IIS and SQL Server on AWS, but that was more work for me, and I had enough things to do.

PaaS vs IaaS

After doing some investigation into Azure's PaaS offerings, namely App Service Web Apps and Azure SQL, I decided to give them a go.

With PaaS offerings, you're trading some flexibility for convenience.

For example, with Web Apps, you don’t deploy your app to a machine or a specific VM - you deploy it to Azure's Web App service, and it deploys it to one or more VMs on your behalf. You don’t remote desktop into the VM to poke around - you use the web-based tools or APIs that Microsoft provides for you. Azure SQL doesn't support all of the features that regular SQL Server does, but it supports most of them. You don’t have the ability to configure where your database and log files will be placed; Azure manages that for you. In most cases, this is a good thing, as you've got better things to do.

With Web Apps, you can easily set up auto-scaling, like I described in part 2, and Azure will magically create or destroy more VMs according to the rules you set up, and route traffic between them. With SQL Azure, you can do cool things like create read-only replicas and geo-redundant failover databases within minutes.

If there is a PaaS offering of a particular piece of infrastructure that you require on whatever cloud you're using, try it out. You'll be giving up some flexibility, but you'll get a whole lot in return. For most scenarios, it will be totally worth it.

3rd Party Technologies

Stripe

The first 3rd party technology service we integrated was Stripe - FastBar is a payment system after all, so we needed a way to vault credit cards and do payment processing. At the time Stripe was the gold standard in terms of developer friendly payment APIs, so we went with it and still use it to this day. We've had our fair share of issues with Stripe, but overall it's worked well for us.

Loggly: A Cautionary Tale

Another piece of 3rd party tech we used early on was Loggly. This is essentially “logging as a service” and instead of you having to figure out how to ingest, process and search large volumes of log data, Loggly provides a cloud-based service for you.

We used this for a couple of years and eventually moved off it because we found the performance was not reliable.

We ran into an incident one time where Loggly, which typically would respond to our requests in 200-300ms, was taking 90-120 seconds to respond (ouch!). Some of our server-side web and API code called Loggly directly as part of the request execution path (a big no-no, that was our bad) and needless to say, when your request thread is tied up waiting for a network call that is going to take 90-120 seconds, everything went to hell.

During the incident, it was tough for us to figure out what was going on, since our logging was impacted. After the incident, we analyzed and eventually tracked down 90-120 second response times from Loggly as the cause. We made changes to our system so that we would never again call Loggly directly as part of a request's execution path, rather we'd log everything "locally" within the Azure environment and have a background process that would push it up to Loggly. This is really what we should have been doing from the beginning. At the same time, Loggly should have been more robust.
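The shape of that fix looks something like this: the request path only ever does a single fast write to table storage in our own Azure environment, and a background job forwards logs to the external service later. A simplified sketch (the entity shape and helper are illustrative):

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage.Table;

public class LogEntryEntity : TableEntity
{
    public string Level { get; set; }
    public string Message { get; set; }
}

public class LocalFirstLogger
{
    readonly CloudTable logTable; // a table in our own storage account

    public LocalFirstLogger(CloudTable logTable) { this.logTable = logTable; }

    // Called from the request path: one quick write within the same
    // data center - never a direct call to the logging service.
    public Task LogAsync(string level, string message)
    {
        return logTable.ExecuteAsync(TableOperation.Insert(new LogEntryEntity
        {
            PartitionKey = DateTime.UtcNow.ToString("yyyyMMddHH"),
            RowKey = Guid.NewGuid().ToString("N"),
            Level = level,
            Message = message
        }));
    }
}
```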

That change made us immune to any future slowdowns on the Loggly side, but over time we still found that our background process was often having trouble sending data to Loggly. We had an auto-retry mechanism set up so we’d keep retrying to send to Loggly until we succeeded. Eventually this would work, but we found this retry mechanism was being triggered way too often for our liking. We also found similar issues on our client apps, where we'd have our client apps send logs directly to Loggly in the background to avoid having to send to our server, then to Loggly. This was more of an issue, since clients operate in constrained bandwidth environments.

Overall, we experienced lots of flakiness with Loggly regardless of whether we were communicating with it from the client or the server.

In addition, the cheaper tiers of Loggly are quite limited in the amount of data you can send to them. For a large event, we'd quickly hit the data cap, and the rest of our logs would be dropped. This made the Loggly search features (which were awesome by the way, and one of the key things that attracted us to Loggly) pretty much useless for us, since we'd only have a fraction of our data available unless we moved up to a significantly more expensive tier.

We removed Loggly from the equation in favor of Azure's Log Analytics (now renamed to Azure Monitor). It's inside Azure with the rest of our stuff, has awesome query capabilities (on par with Loggly) and it’s much cheaper for us due to its cloud-based pricing model that scales with the amount you use it, as opposed to a handful of main pricing buckets with Loggly.

Twilio

We use Twilio for sending SMS messages. Twilio has worked great for us from the early days, and we don’t have any plans to change it anytime soon.

Cloudinary

On a previous project, I got deep into the complexities of image processing: uploading, cropping, resizing and hosting, distributing to a CDN etc…

TL;DR it's something that seems really simple on the surface, but quickly spirals out of control - it’s a hard problem to solve properly. 

I learnt my lesson on a previous project, and on FastBar, I did not pass Go and did not collect $200, rather I went straight to Cloudinary. It's a great product, easy to use, and it removes all of our image processing and hosting hassles.

Mailgun

Turns out sending email is hard. That’s why companies like Mailgun and Sendgrid exist.

We decided to go with Mailgun since it had a better pricing model for our purposes compared to Sendgrid. But fundamentally, they’re both pretty similar. They help you take care of the complexities of sending reliable email so you don’t have to deal with it.

Building out the Event Control Center

As our client apps and their underlying APIs started to mature, we started turning our development focus to building out the Event Control Center on the server - the place where event organizers and FastBar staff could fully configure all aspects of the event, manage settings, configure products and menus, view reports etc…

This was essentially a traditional web app. We looked at using tech like React or Angular. As we specced out our screens, we realized that our requirements were pretty straightforward. We didn't have lots of pages that needed a rich UI, we didn’t have a need for a Single Page App (SPA), and overall, our pages were pretty simple. We decided to go with a more "traditional" request/response ASP.NET web app, using HTML 5, JS, CSS, Bootstrap, jQuery etc…

The first features we deployed were around basic reporting, followed by the ability to create events, edit settings for the event, view and manage attendee tabs, manage refunds, create and configure products and menu items.

Nowadays, we've added multi user support, tax support, comprehensive reporting and export, direct and bulk SMS capabilities, configuration of promotions (ie discounts), device management, attendee surveys and much more.

The days of managing via SQL Management Studio are well in the past (thankfully!).

Re-building the public facing website

For a long time, the public facing section of the website was a simple 1-pager explanation of FastBar. It was long overdue for a refresh, so in 2018 I set out to rebuild it, improve the visuals, and most importantly, update the content to better reflect what we had to offer customers.

For this, we considered a variety of options, including: custom building an ASP.NET site, Wordpress, Wix, Squarespace etc...

Building a custom ASP.NET website was kind of a hassle. Our pages were simple, but it would be highly beneficial if non-developers could easily edit content, so we really needed a basic CMS. This meant we needed to go for either a self-hosted CMS, like Wordpress, or a hosted CMS, like Wordpress.com, Wix or Squarespace.

I had built and deployed enough basic Wordpress sites to know that I didn't want to spend our time, effort and money on self-hosting it. Self-hosting means having to deal with constant updates to the platform and the plugins (Wordpress is a ripe target for hackers, so keeping everything up to date is critical), managing backups and the like.

We were busy enough building features for the FastBar system, and I didn’t want to allocate precious dev resources to the public facing website when a hosted solution at $12/mo (or thereabouts) would be sufficient.

For Wordpress in general, I found it tough to find a good quality template that matched the visuals I was looking for. To be clear, there are a ton of templates available - I'd say way too many. I found it really hard to hunt through the sea of mediocrity to find something I really liked.

When evaluating the hosted offerings like Squarespace and Wix, my first concern was that, as a technology company, potential engineering hires might judge us for using something like that. I don’t know about you, but I'll often F12 or Ctrl-U a website to see what's going on under the hood :) Also, while quick to spin up, hosted offerings like Squarespace lacked what I consider basic features, like version control, so that was a big red flag.

Eventually I determined that the pros and simplicity of a hosted offering outweighed the cons and we went with Squarespace. Within about a week, we had the site re-designed and live - the vast majority of that time was spent on the marketing and messaging, the implementation part was really easy.

Where we're at today

Today, our backend is comprised of 3 main components: the Core Web App and API, our public facing website and 3rd party services that we depend on.

Our core Web App and API is built in ASP.NET and WebAPI and runs on Azure. We leverage Azure App Services, Azure SQL, Azure Storage service (Blob, Table and Queue), Azure Monitor (Application Insights and Log Analytics), Azure Functions, WebJobs, Redis and a few other bits and pieces.

The public facing website runs on Squarespace. 

The 3rd party services we utilize are Stripe, Cloudinary, Twilio and Mailgun.

Lessons Learnt

Looking back at our previous lessons learnt from client side development:

  1. Optimize for Productivity

  2. Choose Something Popular

  3. Choose the Simplest Thing That Works

  4. Favor Cross Platform Tech

The first 3 are highly applicable to server side development. The 4th is more client specific. You could make the argument that it’s valuable server-side as well; it depends on how many server environments you’re planning on deploying to. In most cases, you’re going to pick a stack and stick with it, so it’s less relevant.

Here are some additional lessons we learnt on the server side.

Ruthlessly Prioritize

This one applies to both client and server side development. As a startup, it's important to ruthlessly prioritize your development work and tackle the most important items first. How far can you go without building out an admin UI and instead relying on SQL scripts? What's the most important thing that your customers need right now? What is the #1 feature that will help you move the product and business forward?

Prioritization is hard, especially because it's not just about the needs of the customer. You also have to balance the underlying health of the code base, the design and the architecture of the system. You need to be aware of any technical debt you're creating, and you need to be careful not to paint yourself into a corner that you might not be able to get out of later. You need to think about the future, but not get hung up on it too much that you adopt unnecessary work now that never ends up being needed later. Prioritization requires tough tradeoffs. 

Prioritization is more art than science and I think it's something that continues to evolve with experience. Prioritize the most important things you need right now, and do your best to balance that with future needs.

Just go Cloud

Whether you're choosing AWS, Azure, Google or something else, just go for the cloud. It's pretty much a given these days, so hopefully you don't have the urge to go to the dark side and buy and host your own servers. 

Save yourself the hassle, the time and the money and utilize the cloud. Take advantage of the thousands upon thousands of developers working at Amazon, Microsoft and Google who are working to make your life easier and use the cloud.

Speaking of using the cloud…

Favor PaaS over IaaS

If there is a PaaS solution available that meets your needs, favor it over IaaS. Sure, you'll lose some control, but you'll gain so much in terms of ease of use and advanced capabilities that would be complicated and time consuming for you to build yourself.

It means less work for you, and more time available to dedicate to more important things, so favor PaaS over IaaS.

Favor Pre-Built Solutions

Better still, if there is an entire solution available to you that someone else hosts and manages, favor it.

Again, less work for you, and allows you to focus your time, energy and resources on more important problems that will provide value to your customers, so favor pre-built solutions.


Conclusion

In part 1 we discussed client side technology choices we went through when building FastBar, including our thinking around Android vs iOS, which client technology stack to use, how our various apps evolved, where we’re at today, and key lessons learnt.

In part 2 and part 3, this post, we discussed the server side, including choosing a server side stack, building out an MVP, deciding to self-host or utilize the cloud, AWS vs Azure, various other 3rd party technologies we adopted, where we’re at today and more lessons learnt, primarily related to server side development.

Hopefully you can leverage some of these lessons in building your own startup. Good luck, and go change the world!

Choosing a Tech Stack for Your Startup - Part 2: Server Side Choices and Building Your MVP

In part 1 of this series, we covered a detailed overview of how FastBar chose its client side technology stack, how it evolved over the years, where it is today, and key lessons we learnt along the way.

In this post, part 2, we'll start exploring the server side technology choices and conclude in part 3.

FastBar Components - Server Hilighted.png

Selecting a Stack

The very first prototype version of FastBar that was built at Startup Weekend didn't have much of a server side at all. I think we had a couple of basic API endpoints built in Ruby on Rails and deployed to Heroku.

After Startup Weekend when we became serious about moving FastBar forward, building out the server side became a priority and we needed to select a tech stack.

We discussed various options: Ruby on Rails, Go, Java, PHP, Node.js and ASP.NET. I decided to go with ASP.NET MVC and C# for a few reasons:

  1. Familiarity

  2. Suitability for the job

  3. How the platform was evolving

Familiarity

.NET and C# were the platform and language that I was most familiar with. I spent 7.5 years working at Microsoft, the first 2 of which were in the C# compiler team, the next 5.5 helping customers architect and build large-scale systems on Microsoft technology. Since leaving Microsoft, I spent a lot of time using .NET for a startup, along with some consulting work. For me, .NET technology was going to be the most productive option.

In tech, there is a ton of religion, and often times (perhaps most of the time) people make decisions on technologies based on their particular flavor of religion. They believe that X is faster than Y, or A is better than B for [insert unfounded reason here].  The reality is there are many technology choices, all of which have pros and cons. So long as you're choosing a technology that is mainstream and well suited for the task at hand, you can probably be successful building your system using (almost) whatever technology stack you prefer.

Suitability for the job

It's important to select a tool that's suitable for the job you're trying to achieve. For us, our backend required a website and an API - pretty standard stuff. ASP.NET is a great platform for doing just that. Likewise, there are many other fine choices, including Ruby on Rails or Node.js.

How the platform was evolving

Back in 2014, Microsoft technologies were often shunned by developers who were not familiar with .NET, because they felt that Microsoft was the evil empire, all they produced was proprietary tech and closed source code, and nothing good could possibly come from the walled garden that was Redmond.

The reality was quite different. As early as 2006, Microsoft started a project called CodePlex as a way to share source code, and used it to publish technologies like the AJAX Control Toolkit. In October 2007, Microsoft published the source code for the entire .NET Framework. It wasn't an "open source" project per se, but rather "reference source" - it allowed developers to step into the code and see what was going on under the hood, addressing a primary complaint related to proprietary software and closed source systems. Also in October 2007, Microsoft announced that the upcoming ASP.NET MVC project would be fully open source. In 2008, the rest of the ASP.NET technologies were also open sourced.

That trend continued, with Microsoft open sourcing more and more stuff. Fast forward to April 2014 and Microsoft made a big announcement regarding open source: the creation of the .NET foundation, and the open-sourcing of large chunks of .NET. Later that year, they open sourced even more stuff.

Fast forward again to today, now Microsoft owns Github and has made most, if not all, of .NET open source. It's pretty clear that Microsoft is "all in" on open source. Here's an interesting article on the state of Microsoft and open source as of December 2018. And if you're interested in some more background, Scott Hunter and Beth Massi have some great Medium posts that chronicle some of Microsoft's journey into open source.

Back in 2014, I was a fan of .NET technology, I liked the direction it was moving in, and felt that the trend towards open sourcing more stuff would only strengthen the technology and the ecosystem. Looking back, this has proved correct.

Building the Basics

With our tech stack chosen, the first 2 things we needed to build were (a) a basic customer facing website and (b) an API with underlying business logic and data schema to support the POS. In startup land, this is often called an MVP, or Minimum Viable Product.

For the customer facing website, I built a 1-pager ASP.NET website using Bootstrap. It was simple, but looked decent enough and was mobile friendly. It really just needed to serve as a landing page with a brief explanation of FastBar and a "join our email list" form. That site actually lasted way longer than it should have :)

The more important thing we needed was an API that the client apps could talk to: first to push up order details and next to display tabs to attendees so they could keep track of their spending. Although it would have been nice to have an administrative UI so we could view and manage attendees and orders, configure products and menus, view reports etc… there was a lot of effort required to build it, and it wasn't the highest priority thing we needed to implement.

Our First UI: SQL Management Studio and Excel

For a long time, the primary interface we used to setup and manage events was SQL Management Studio. I created an Excel spreadsheet that served as a helper tool to generate SQL statements which would in turn be run in SQL Management Studio. This was definitely a rough and ready approach, and not my preferred path, but hey, as a startup with limited resources, you need to pick your battles.

Reporting was done via a somewhat complicated SQL query that would spit out tabular sales results, which I'd then copy/paste into a fancy Excel spreadsheet I'd created. The results of the copy/paste would drive a "dashboard" tab in the spreadsheet that summarized key metrics, as well as a series of other pages that would show fancy graphs of sales over time and product breakdowns.

This was all rather crude, but like I said, as a startup with limited resources, you need to pick your battles and focus on highest priority tasks first.

You see, our attendees didn’t care about how the system was configured or how reports were generated. They simply wanted to get their wristband, tap to pay for their drinks and get back to enjoying the event.

Our event organizers didn’t much care how the event was configured either, so long as the point of sale displayed the right products at the right prices, customers were charged correctly, money appeared in their bank account and they could get some kind of reporting. We took care of all of the configuration of the system on their behalf, and Excel-based reports were fine for them, in the early days.

Self-hosted vs Cloud

In 2011 some friends of mine left Microsoft to start a company. At the time, Amazon Web Services (AWS) was coming along nicely, and most forward-thinking companies and startups were looking to the cloud.

The CTO for my friend's company, let's call him "Bob" (not his real name), decided that it would be cheaper to buy the hardware himself instead of going with something cloud-based. Bob created spreadsheets to justify his desire to buy servers and show how over time it would be cheaper. In reality, Bob was a "build his own metal" kinda guy. Bob wanted to spend money on cool hardware and build his own servers and that's what he was comfortable with, so he found a way to justify it.

Bob spent a couple of hundred thousand dollars on servers. A few years later, all of those servers were sitting in a spare room in their office collecting dust.

Don't be like Bob.

In 2011, it didn’t make sense to buy your own servers. AWS was a great choice. Azure was early, and quite frankly pretty crap at the time. Google's App Engine existed, but I don't think anyone actually used it.

In 2014 when FastBar started, it didn’t make sense to buy your own servers. AWS was cranking along and adding new services at a furious pace, and Azure was busy catching up. Azure had moved from crap a couple of years earlier to a really solid offering by 2014.

Today, it definitely doesn't make sense to buy your own servers. Unless you're Google, or Microsoft, or Amazon, then sure, buy as many servers as you need. But for the rest of us, cloud computing is so much simpler and easier. For example, at FastBar we have a script that will deploy a fresh version of our entire FastBar environment, including:

  • Web applications, APIs and background workers across multiple servers

  • Geo-redundant SQL databases

  • Geo-redundant storage services

  • Monitoring and logging resources

  • Redis caches

There's like 20-odd components altogether, and this all happens within minutes. This is something that would take days to deploy to our own servers by hand, or we would have spent weeks or months automating the deployment process.

Not only that, but if we decide we need to scale up our front end web servers, all it takes is a couple of clicks, and within minutes, we’ll be running on more powerful hardware.

Better still, we have automatic scaling setup, so if our webservers start getting overloaded, Azure makes more of them magically appear, and when things go back to normal, the extra servers simply go away.

It's a beautiful thing, and it makes me very happy. I'm pretty sure I still have a bit of PTSD when I think about how much effort it would take to set this stuff up manually before the cloud came along.

Another argument I used to hear against the cloud from Bob was that the cloud has lots of outages (here's some recent outages from 2018), but Bob claimed his "servers have never gone down". Maybe they haven't. But they will eventually, and usually at the worst possible time.

It's true that the cloud has outages. And these days when something fails at one of the big cloud providers, it's got the potential to take out a huge portion of the internet. But the cloud providers are getting better - their systems get stronger, and they learn from their mistakes.

Personally, I'd much rather be relying on something like Azure or AWS or Google Cloud, so that when an outage occurs (note I said when, not if - all systems go down at some point), there are thousands upon thousands of people tweeting/writing/blogging about it, and hundreds or maybe thousands of engineers working on fixing the problem.

Forget about buying servers, and deploy your system to the cloud. There are so many benefits - from zero up front capital expenditure, to spinning up and down infrastructure and building out globally scalable and redundant systems within minutes.

Stay tuned for part 3 where we'll explore the different cloud stacks, Platform as a Service (PaaS) vs Infrastructure as a Service (IaaS), 3rd party technologies, where FastBar is at today and key lessons learnt.

Choosing a Tech Stack for Your Startup - Part 1: Client Apps

When I started FastBar in 2014, one of the first major decisions I needed to make was which tech stack we were going to use. If you're starting a tech company, chances are you'll need to make this decision as well. Hopefully my decision process and lessons learnt can help you too.

This is the first in a 3 part series - in part 1, we'll deal with the client side of our app. I'll explore the history and decision making process behind the technologies I chose, talk about where we are at today, along with lessons learnt.

 By way of quick background, FastBar provides a cashless payment system for live events - think concerts, music festivals, beer and wine events etc… When an attendee arrives at an event, they get a wristband that's connected to their credit card. At the bar, they tap to pay in less than a second, and at the end of the event we automatically close them out with their card on file and send them an electronic receipt. I've previously posted some more details on the beginning of FastBar and why I started it.

 Fundamentally, FastBar is a payment system for events and is composed of 3 key components: a Registration App, Point of Sale (POS) App and the backend website and API. Since I first started FastBar, the product has evolved significantly, and the technology stack along with it.

High Level Components

 From a high-level perspective, here's what FastBar looks like today:

FastBar - High Level Components.png
  • Registration App - this is an app used onsite at events by event staff to swipe credit cards and issue RFID (or more technically, NFC) wristbands to attendees. It's also used for a variety of other maintenance tasks required onsite at events, like adding a new wristband to an attendee's account, disabling existing wristbands, loading cash onto wristbands etc…

  • POS App - used at events by event staff to sell stuff to attendees - whether that’s beverages, food or merchandise. Event staff enter what the attendee wants to purchase, then the attendee taps their wristband against an RFID/NFC reader connected to the POS to complete their transaction

  • The backend Web App and API have 4 key areas:

    • Attendee area: this allows attendees to check their tab during and after the event, request an email receipt, and register themselves online before they arrive at the event

    • Event organizer area: otherwise known as our Event Control Center, this is used by event organizers to set up and manage all aspects of their event, including configuring products, pricing and menus, and viewing all kinds of reports

    • Admin area: used by FastBar staff to administer the system

    • API: used by our Registration and POS Apps, along with some 3rd party systems that integrate with FastBar

In this post, we’re going to focus on the client side technologies:

FastBar Components - Client Hilighted.png

FastBar's Early Development and Client-Side Tech Choices

The first app we built was the POS app. The prototype was created at Startup Weekend, and we went with iOS primarily because one of the guys on the team had a background in iOS development and was familiar with it. Skillset dictated our choice.

Android vs iOS

After Startup Weekend, when we got serious about moving FastBar forward and turning it into a real company, we needed to decide what platform we were going to use for the POS app. We went with Objective-C on iOS primarily because iOS was the most prevalent and consistent tablet platform on the market.

At the time (mid 2014), Android commanded a higher share of the global market for tablets and mobile devices: 48% vs 33% for iOS.

However, when you narrow that to just tablets, iOS was significantly in front, at 71%.

Narrow that down even further to North America, the primary market we cared about, and iOS was even more dominant, with nearly 77%.

Incidentally, here's how the tablet market share in North America has changed in the past 5 years:

In other words, not much.

Not only was iOS dominating in market share, but it was far less fragmented. As of June 2014, there were only 7 different iOS devices, all manufactured by the same company, Apple:

  • iPad

  • iPad 2

  • iPad 3rd gen

  • iPad Mini

  • iPad 4th gen

  • iPad Air

  • iPad Mini 2

The only real difference between these devices from a development perspective was the screen size and resolution. Building apps to run on all of these devices was straightforward.

By contrast, the Android market was highly fragmented with hundreds or probably thousands of devices available from tons of different manufacturers. Trying to build an application that will work consistently across all Android devices was a lot harder.

Given iOS was dominant and consistent, this meant:

  1. Plenty of devs - there were plenty of developers around who knew how to build apps for iOS

  2. Customer familiarity - our potential customers often times already had iOS devices. It would be easier for us to convince them to use a platform they were already familiar with rather than introduce something new into their environment

  3. Large rental market - there was a large market for renting iPads, meaning we didn’t need to incur a lot of up-front capital expense, instead, we could own a handful of devices and rent the additional ones we needed for an event

  4. Lots of accessories - there was a lot of choice in accessories for iPads, like point of sale stands. Less so for Android devices

From a pricing standpoint, although it was possible to find really cheap Android tablets (around $50-80), by the time you went for something that was higher quality and similarly specced to the iPad, you were looking at a similar price point.

With the hardware decision made, we continued to evolve the first prototype and stuck with Objective-C as the language of choice. Later in 2014, Apple released Swift, so we started to build some parts of the app in Swift as well.

The First Live Event

In August 2014 we executed our first live event where we processed payments. This made use of our POS App, and to register attendees we used a web-based registration app running on a laptop with a USB connected RFID reader. We quickly realized that the web-based app running on a PC was not a good option for us. We needed to be mobile, and a phone form factor was going to be a much better solution. 

At that point we set out to build the first version of our Registration app, again using Objective-C and Swift running on iOS.

Native or Not?

In early 2015 we decided we needed a 3rd app, an Attendee app, that would be used by attendees to check their tabs during the event. Since this app was attendee facing, it would be deployed on attendees' own devices, and we had no control over what hardware they'd be using (unlike with the POS and Registration apps) - that meant it needed to be multi-platform. Building for Android and iOS would allow us to cover well over 90% of the market, which was good enough for us as a startup. At the time, Windows Phone still kind of existed, but it was on the decline and, quite frankly, just not relevant for us.

We had a decision to make: do we go native and build separate apps for each platform, or use a cross-platform technology that would allow us to target both Android and iOS? Both options had their pros and cons. If we went native, we'd have full access to the underlying platform and be able to create the best experience possible for users. However, we'd be maintaining 2 different code bases, which is tough, especially for a resource-strapped startup. If we went cross-platform, we'd have a single code base to maintain, so development would be quicker. On the flip side, cross-platform technology wasn't as evolved back then as it is now. Couple that with the fact that, no matter what, you will never be able to create as optimized an experience using cross-platform technology as you can with native tech - by definition, you're catering to the lowest common denominator.

Betting on Xamarin

We decided that the benefits of some kind of cross-platform tech far outweighed the cons. Our requirements were fairly basic and fit well within the realm of what cross-platform tech was good for, ie a fairly standard "business" app which primarily required forms and data synchronization. In our case, we also needed to interact with local hardware.

We decided to give Xamarin a shot. If you're not familiar with it, Xamarin allows you to build native applications for iOS (using Xamarin.iOS) or Android (using Xamarin.Android) using C# and .NET. The cool thing about Xamarin is that if you factor your application correctly, you can re-use any code that is non-platform specific. For example, a lot of our app dealt with network communication, data synchronization to the server and data manipulation locally (ie reading from and writing to our local SQLite database). Essentially all of that could be re-used across both iOS and Android without changes.

In addition, in mid-2014 Xamarin released a technology called Xamarin Forms which allows you to build UI screens in a cross-platform manner. This meant we could write a screen once, and it would run on iOS and Android without us having to build separate screens for both platforms. Xamarin Forms is also smart enough to render controls in a platform-specific manner: for example, when we define a date picker on a screen, it will look like a native iOS date picker when running on iOS, and a native Android date picker when running on Android.
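
To make this concrete, here's a minimal sketch of the kind of screen you can write once and run on both platforms - the names and layout are illustrative, not FastBar's actual code:

```csharp
// Shared code - written once, runs on both iOS and Android.
// All names here are illustrative, not FastBar's actual code.
using System;
using Xamarin.Forms;

public class EventSettingsPage : ContentPage
{
    public EventSettingsPage()
    {
        // Xamarin Forms renders this as a native UIDatePicker on iOS
        // and a native date picker dialog on Android.
        var eventDate = new DatePicker { Date = DateTime.Today };

        Content = new StackLayout
        {
            Padding = 20,
            Children =
            {
                new Label { Text = "Event date" },
                eventDate
            }
        };
    }
}
```

The page itself contains no platform-specific code; Xamarin Forms maps each control to its native counterpart at runtime.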

Essentially, Xamarin allowed us to build an app that was both native and cross platform - it's kind of like the best of both worlds.

I was initially cautious of Xamarin - I'd had some negative experiences using an earlier version a couple of years prior, but one of the guys on our team had used it recently, was familiar with it, and assured us it had evolved significantly in the last couple of years.

In addition, it would allow us to use the same language on the client apps as we were using on the backend: C# (more on that in part 2 of this blog post). That made it easier for developers working across the client and server to transition between codebases.

Our bet on Xamarin proved to be a good one. We rolled out the Attendee app on both Android and iOS simultaneously. It took us around 20% extra time to release apps for both iOS and Android vs what it would have taken for us to build for a single platform if we were doing it natively.

First Major Rebuild for Registration and POS

Towards the end of 2015, we decided it was time for our first major re-build. We were constantly evolving the code bases for both the POS and Registration apps, but were starting to find it harder to add new features and harder to find and fix bugs. Both of these apps had evolved from prototype code, and were never designed to last as long as they had. We were in a position where we just couldn't move as fast as we wanted to.

Our experiment with Xamarin for the Attendee app had been a success, so we felt confident picking Xamarin as our primary development platform moving forward. The Registration app was the highest priority for us to rebuild, as it needed some significant changes. We decided to tackle it first, with a view to building as many re-usable components as possible for use in the POS, when we got around to rebuilding that as well.

We rolled out the first version of our new Registration app in early 2016 and continued to evolve it throughout the year, eventually sunsetting the legacy Registration app when the new one had proven itself at live events and had subsumed all features of the legacy version.

Towards the end of 2016, we started work on the fully re-built version of the POS app, which initially rolled out early 2017. Similarly, we maintained the legacy version of the POS until the re-built version was battle-tested and ready for prime time.

Side note: Sunsetting the Attendee App

Also towards the end of 2016, we decided to sunset the Attendee app. We always knew that we could never force attendees to download an app at an event - it's a poor user experience to require this, and also impractical in an event environment where connectivity could be scarce or nonexistent, ie 5000 people each trying to download an 80Mb app with little to no connectivity is a nonstarter.

The key features that an attendee needed were straightforward, so we decided to focus on just delivering those features via a mobile friendly website and SMS, which was a simple and more universally accessible solution.

Code Reuse Across Multiple Apps

We found that reusing code across both the Registration and POS app was tougher than we originally thought. It wasn't until the Registration app was re-built and we started work on the new POS app that we realized a lot of decisions we made in the implementation of the Registration app were not conducive to reuse in another app, so some refactoring was required.

Code reuse is an interesting beast. On one hand, we wanted to move quickly on the development of the Registration app and didn't want to invest too much time over-complicating the design or over-genericizing things that might need to be reused one day, or might not - that would violate the YAGNI principle: Ya Ain't Gonna Need It. On the other hand, it doesn't make sense to duplicate logic and components in two different apps, which just leads to inconsistency in the user experience and is harder to maintain and debug - that violates the DRY principle: Don't Repeat Yourself.

It's a tough balance to find the right level. Over time, we got better at determining what should be reused across apps and how to structure that code, and what we should keep separate for each app. It's something we're constantly monitoring as we build new features.
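
As a rough illustration of the kind of split we converged on - all names here are hypothetical, not our actual project layout - shared, app-agnostic logic lives in a common library, while app-specific behavior stays with each app:

```csharp
// FastBar.Core (shared library) - logic both apps need.
// Hypothetical names, for illustration only.
using System.Collections.Generic;
using System.Threading.Tasks;

namespace FastBar.Core
{
    public interface IOrderSyncService
    {
        // Pushes locally-queued orders to the server whenever
        // connectivity allows; both apps reuse the same implementation.
        Task PushPendingOrdersAsync();
    }
}

// FastBar.Pos (app-specific) - only the POS needs a shopping cart,
// so it stays out of the shared library.
namespace FastBar.Pos
{
    public class ShoppingCart
    {
        readonly List<string> _items = new List<string>();

        public void Add(string productId) => _items.Add(productId);
        public int Count => _items.Count;
    }
}
```

In practice, the dividing line tends to be: if both apps need it today, it's shared; if only one app might need it someday, it stays local until that day actually comes.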

Side note: Microsoft's Acquisition of Xamarin

In February 2016 Microsoft acquired Xamarin - I felt that was a good sign in terms of the longevity of the technology. Since then, I think Xamarin has continued to develop nicely, and looking back, I'm happy with the bet we made on Xamarin.

Where We're at Today

Today, FastBar has 2 client apps: the Registration app and the POS app.

They're both built in C# using Xamarin, and both target iOS only (at the moment).

Almost all screens are built using Xamarin Forms, except for 1 screen in the POS, the main screen:

[Screenshot: the main POS screen]

(Note: it is possible to have screens built in Xamarin Forms and screens built in Xamarin.iOS or Xamarin.Android exist side by side in the one app)

Originally, this screen was built in Xamarin Forms, but we found the responsiveness when tapping the screen wasn't quite where we wanted it to be. Sometimes, when tapping the screen in quick succession, taps would be missed and those items would not end up in the shopping cart. We decided to rebuild it natively in Xamarin.iOS to achieve the best possible performance. It seems slightly quicker and more responsive than its Xamarin Forms counterpart. Having said that, I'm sure Xamarin Forms has improved, and nowadays it might be as performant as the native implementation. What we have now works well, so there is no need for us to change it.
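
For what it's worth, mixing the two is straightforward: a Xamarin Forms page can be wrapped in a view controller and pushed onto the same navigation stack as the fully-native screens. A minimal sketch of the idea - page and method names are hypothetical, and it assumes Forms has been initialized at startup:

```csharp
// iOS project - pushing a Xamarin Forms page from a fully-native screen.
// Hypothetical names; assumes Xamarin.Forms.Forms.Init() ran at startup.
using UIKit;
using Xamarin.Forms;
using Xamarin.Forms.Platform.iOS;

// A normal Xamarin Forms page, shared across platforms.
public class ReceiptsPage : ContentPage
{
    public ReceiptsPage()
    {
        Content = new Label { Text = "Receipts" };
    }
}

// The fully-native main POS screen.
public class MainPosViewController : UIViewController
{
    void ShowReceipts()
    {
        // CreateViewController() wraps the Forms page in a UIViewController,
        // so it can be pushed onto the native navigation stack.
        NavigationController.PushViewController(
            new ReceiptsPage().CreateViewController(), true);
    }
}
```

Going this route, the main screen stays fully native for responsiveness, while everything else remains shared.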

Since both apps are quite similar, both visually and in their underlying logic and how they interact with the server, we have a high degree of code and UI screen re-use across both apps.

As of the date of this post, we don’t yet target Android, although it is reasonably trivial for us to do so.

We don't have an Attendee app any more - it's not really necessary - although we may choose to re-introduce one in the future. We may also add an Event Organizer app that will allow event organizers to manage their events while the event is in operation. Today they can fully manage their event from our mobile friendly website, but an app running locally would allow us to provide a better experience, especially in event environments with poor internet connectivity.

Lessons Learnt

Optimize for Productivity

When choosing what technology you want to use for your startup, optimize for productivity.

Let's say there are 2 technology stacks: A and B. Your team has background and experience in A and thinks it will take 12 weeks to build your minimal viable product (MVP). Your well-meaning, but technologically religious friend, let's call him Bob, says "You're crazy, why would you choose A, B is so much better. I could build it in 8 weeks using B!".

Maybe Bob is right.

Maybe B is way more productive than A.

Maybe, given 2 capable teams, building this solution out using A will take 12 weeks and B will take 8 weeks.

But probably it won't.

And it will almost certainly take your team, familiar with A, longer than 8 weeks to build something in B, because they need to get up to speed with the tech.

When thinking about productivity, definitely factor in things like language productivity, available frameworks etc… and also be sure to factor in the experience and skill set of your team. Choose what's going to be most productive for you.

Choose Something Popular

Quite frankly, it doesn't really matter what technology stack you choose, within reason.

For the most part, a good team is going to be able to build out your solution in Objective-C or Swift (native iOS), Java (native Android), or Xamarin, React Native, Ionic or PhoneGap (cross-platform-ish technologies). Just make sure you choose something that is popular, meaning:

  • It is used by many applications

  • You can easily find and hire developers who understand it

  • It is actively maintained

Popular frameworks have companies, or more commonly, an open source community, behind them who keep building new features, keep fixing bugs, keep plugging security holes etc… all on your behalf! You don’t want to take this burden on by yourself, you have better things to do. Choose a technology that's popular.

Choose the Simplest Thing That Works

Follow the old acronym, KISS: Keep It Simple Stupid.

Engineers like to engineer. It's in their nature. This means it's very easy for them to over-engineer a solution. Choose the simplest solution that will solve your problem, and work with it for as long as possible.

For us, we ended up sunsetting one of our apps, the Attendee app, when we realized it was not the simplest solution possible. It was extra code to maintain that we didn't really need, and since it was not something we could force on users, we needed a more universally accessible solution anyway, like a mobile friendly website. Given we needed the website regardless, did we really need the app? It was unnecessary, so we killed it.

Now, we may bring it back at some point in the future if it becomes necessary or advantageous, but for the moment, the mobile web solution we have is simple, it's easy, and it works. Choose the simplest thing that works.

Favor Cross-Platform Tech

If you're building for multiple platforms, do yourself a favor and use some cross-platform tech.

If you know for certain that you only care about Android and your team has experience in Java-based Android development, awesome - refer to "Optimize for Productivity" above and go for it.

Likewise, if you're only building for iOS and you've got the experience in Swift or Objective-C, good for you, fire up Xcode and get crackin'.

But if you're looking to build apps for multiple platforms, like iOS and Android, do yourself a favor and go with some kind of cross-platform tech so you can get as much code reuse as possible. We went with Xamarin and it has worked well for us, but whatever you choose, you don't want to be implementing the same features 2 different ways on 2 different platforms, dealing with 2 different sets of bugs to fix etc… That's a nightmare, and you've got better things to do.

Also, if you think there is a high probability that you'll need to go multi-platform in the future, do yourself a favor and choose some cross-platform tech to begin with. It will probably be just as quick to build your app as it would be to do it natively, and now you've got the added flexibility of adding that other platform if and when you need to later on. Favor cross-platform tech.


Stay tuned for part 2 of this series where we'll explore server side technology choices…