Entrepreneur's Toolkit: Tools and Technology for your Startup - Part 4

In part 1 of this series we covered software for productivity, chat and collaboration, audio video conferencing and file sharing. Part 2 covered password management, electronic signatures, accounting, business phone number and project management. Part 3 was all about CRM, email marketing, your website, and data management. In this, part 4, we're all about the money.


HR and Payroll

When you start to hire employees, you need to worry about things like keeping track of their personal details securely, onboarding, paying them, filing taxes, filling in copious amounts of local, state and federal paperwork, managing benefits, managing time off etc… Thankfully, HR and Payroll software has emerged to help with at least part of this.

Gusto

Gusto.png

Gusto, formerly ZenPayroll (but no relation to Zendesk), started off as a payroll solution and has since evolved into a full-fledged HR platform. It's got a ton of features, including payroll and automatic filing of state and federal taxes and forms on your behalf. Pricing starts at $39/mo base price + $6/employee/mo.

Zenefits

Zenefits.png

Similar to Gusto, Zenefits is also a full-fledged HR system. Zenefits started out focused on HR and managing employee benefits, and has since moved into payroll. Pricing starts at $14/employee/mo for HR and payroll.

My Recommendation: Gusto

Gusto 512w.png

I'm going to recommend Gusto out of familiarity here. I've used it before, and it’s pretty good. Last time I checked out Zenefits in detail, they didn’t have a built-in payroll product, but they do now. Check them both out and see which one fits your requirements better.


Payment Processing

You're going to need to collect money from people at some point, hopefully sooner rather than later :) There are a couple of different areas to consider when it comes to payment processing.

Accepting Payments on Your Website

If you're selling something via your website, you'll need an online payment processing service. There are a myriad of different choices out there; here are what I consider to be the top 2. Both of these generally require some level of development to integrate with your site.

Stripe

Stripe.png

Stripe is really the gold standard when it comes to developer friendly payment APIs. They started at a time when Paypal was king, but a real hassle for developers to use; Stripe came along and made it easy. They also provide payment forms that you can pretty easily drop into your website with minimal development effort. There are no setup or monthly fees, you just pay 2.9% + $0.30 per transaction.
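To give you an idea of what "developer friendly" looks like in practice, here's a rough sketch of creating a charge using Stripe's official .NET library, Stripe.net. Treat it as illustrative only - the API key and card token below are test placeholders, not real values.

using Stripe;

public class StripeChargeExample
{
    public static Charge CreateExampleCharge()
    {
        // Placeholder test key - use your own secret key from the Stripe dashboard
        StripeConfiguration.ApiKey = "sk_test_...";

        var options = new ChargeCreateOptions
        {
            Amount = 2000,       // amount in cents, ie $20.00
            Currency = "usd",
            Source = "tok_visa", // Stripe's standard test card token
            Description = "Example charge"
        };

        var service = new ChargeService();
        return service.Create(options);
    }
}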

Braintree

Braintree.png

Braintree is a Stripe competitor, and these days it has a similarly developer friendly API. Paypal acquired Braintree in 2013, but they still run as an independent brand. Pricing is the same as Stripe at 2.9% + $0.30.

My Recommendation: Stripe

Stripe 512w.png

I've used Stripe for years. I've had a few hiccups every now and again, but overall, it's a solid product, and one of the world's leading online payment processors.


Accepting Payments in Person

If your business needs to accept payments in person, there are a couple of turn-key solutions to check out.

Square

Square.png

Square was one of the first companies to turn the iPhone into a payment device with their original Square Reader. They're now one of the most popular small business payment processing companies around and have expanded their product offering significantly.

If you just need to accept the occasional in-person payment, you can do that with as little as your own phone + the Square App and Reader. If you're in need of a point of sale, you can run the Square App on an iPad and couple it with the Square Stand, or even go for their dedicated hardware option, Square Register.


Paypal

Paypal.png

In response to Square Reader, Paypal launched their Paypal Here Reader and App. This is conceptually similar to Square, especially if you're using your phone, although they don't really have a competitor to the Square Stand or Square Register.


My Recommendation: Square

Square 512w.png

For the occasional in-person payment, either Square or Paypal will work fine. But if your requirements move beyond that and you're looking for an actual POS, Square's software is more full-featured and they have better hardware offerings, like Square Stand and Square Register.


Paying Vendors Within the US

At some point you're going to need to pay other vendors that you work with.

Credit Cards

Always opt to pay vendors with a credit card if you can. I'll dive into more detail on the top credit cards I like to use from a business perspective another time, but there are 2 key benefits to paying via a credit card:

  1. You get points, which you can use to offset business costs for flights, hotels or just about anything else depending on your card

  2. You typically don't have to pay for around 15-45 days, depending on your card and when you make a purchase during your billing cycle. This helps with your cashflow

Just make sure you pay it off at the end of every month to avoid interest charges and fees!

Checks

Surprisingly enough, you'll still encounter some vendors that don't take credit cards, and where another method of electronic transfer is not possible or practical. It's 2019, but not everyone has got that memo apparently.

Paypal

If your vendor can't take a payment with a credit card, Paypal is also a good option for smaller electronic transfers. Most people will have a Paypal account, and if they don't already (unusual), it's quick and easy to set up.

Whatever Electronic Transfer Method Your Bank Offers

In Australia, where I'm from, there is a payment system called BPAY that pretty much all medium-large vendors are on. It allows you to quickly make a payment for free from your bank account. I'm surprised there isn't an equivalent in the US. For any vendors in Australia that don't have BPAY, you can also really easily make a transfer from any Aussie bank to any other Aussie bank via your bank's website. This is similar in concept to ACH in the US, but more broadly accessible. For example, my bank here in the US has no ACH transfer ability on their website. I could use a 3rd party provider that would debit my bank account (via ACH), then credit the target account (also via ACH), but it's slow, cumbersome, and there is usually some kind of fee involved.

If your bank does offer ACH transfers via their website, it's a good option for an electronic transfer.

Some banks also have "bill pay" features which either end up doing an electronic transfer, or cutting a paper check and sending it to the target institution.

Also, some banks have various other transfer mechanisms like Zelle built in. You'll need to check with your bank to see what kind of electronic funds transfer features they offer.

If you're dealing with larger amounts of money, a wire transfer is the way to go. It's usually a bit cumbersome to set up depending on your bank and will typically cost you between $10-40, but you can transfer large amounts of money and in some cases it can be done the same day.

My Recommendation: Credit Card

Credit card payment should be your go-to - for the points, better cash flow, and purchase protection via your credit card company in case you need it.

Next, after credit card, try Paypal, followed by ACH if your bank offers it.

Wire transfer is the go-to for larger transfer amounts where the vendor doesn't want to take a credit card, or wants to charge you a payment processing fee.


Paying Vendors Internationally

If you're working with international vendors, you need to figure out a quick and easy way to pay them. You've got a few options.

Credit Card

Again, credit cards will be the go-to, but many times this will be a non-starter for international vendors and suppliers. For example, factories in China will rarely accept a credit card, and you usually can't use a credit card to pay your dev team in India or your virtual assistant in the Philippines.

Paypal

Paypal is fairly ubiquitous and does offer cross-border payments, however, the transfer fees + currency conversion fees are pretty substantial, and can often equate to around 5% of the amount you're sending.

Wire Transfer From Your Bank

Wire transfers can be cumbersome to set up, but once you're past that hurdle, you should be able to transfer money pretty quickly (usually 1-4 days) internationally.

Most suppliers in China will request a TT or "Telegraphic Transfer", aka wire transfer. TT is what it's commonly known as in China.

Sometimes you can get them to accept credit card via Paypal or Alibaba (especially for smaller orders), but in my experience they almost always want a TT payment. If you do get them to accept a credit card payment, they'll usually pass the payment processing cost onto you, typically around 3% or so. For example, on a $1500 payment: $1500 x 3% = $45 in fees.

Wire transfer will typically be cheaper for you at $10-40.

There are 2 key disadvantages to doing a wire transfer:

  1. If the vendor doesn't deliver, you have little recourse. If you use Paypal or a credit card, you've got someone that you can complain to (ie Paypal or your credit card company) and have a good chance of getting your money back

  2. You're missing out on those points!

Remitly

Remitly.png

Remitly offers a pretty good product for paying international vendors. If you're paying staff overseas, this is a great option. The way it works is pretty straightforward - you connect up your US-based bank account, debit card or credit card, and then make transfers to your overseas vendors.

Remitly started out just offering payments to India, but they've since expanded to a bunch of countries, and keep adding more regularly. I would expect you'll eventually be able to use Remitly to transfer money anywhere.

Their fees and exchange rates are pretty reasonable.

My Recommendation: It Depends

This one doesn’t have a single recommendation, and the answer is highly dependent on who you’re paying and where:

  • If you can pay with a credit card, do it

  • If you're paying international vendors for amounts over $1500, and they won't let you pay with credit card, you can try Paypal. If that isn’t an option, a wire transfer is usually your best choice

  • If you're paying international team members, and they're in a country that works with Remitly, it’s a solid option

 

Conclusion

Building a startup is hard. There's lots of stuff you need to manage, and lots of different tools out there to help you get the job done. However, figuring out which tools to use can be a full time job in and of itself. Hopefully this post saves you a couple of weeks of trial and error so you can get back to the business of building your startup. Here's a summary of my recommendations:

  • Productivity Suite (email, word processing etc…): Office 365

    • Or G Suite if you prefer, both are solid

  • Chat and collaboration - Free version of Slack, but when you have to pay, go for Teams / Hangouts (depending on your Productivity Suite choice)

  • Audio and video conferencing

    • Within your team: Teams / Hangouts (depending on your Productivity Suite choice)

    • Externally with Customers and Partners: Zoom

  • File Sharing: Dropbox (if you want to spend the money), or OneDrive for Business / Google Drive (depending on your Productivity Suite choice)

  • Password Management: LastPass

  • Electronic Signature Service: Docusign

  • Accounting: Quickbooks Online

  • Virtual Phone Number: Google Voice free, then Grasshopper

  • General Project Management: Asana

  • Software Project Management: Pivotal Tracker

  • Customer Relationship Management: Pipedrive

    • Unless your requirements are super simple, then Hubspot free version is also good

  • Customer Support: Zendesk

  • Email Marketing: Mailchimp

  • Website: Squarespace

  • Build Your Own Database: Airtable

  • HR and Payroll: Gusto

  • Payment Processing

    • Accepting Payments on Your Website: Stripe

    • Accepting Payments in Person: Square

    • Paying Vendors in the US: Credit Cards wherever possible (then ACH or Wire transfer, depending on the amount)

    • Paying Vendors Internationally: depends on where your vendors are, but look at Credit Cards, Paypal, Remitly or wire transfers

Entrepreneur's Toolkit: Tools and Technology for your Startup - Part 3

In part 1 of this series, we covered software for productivity, chat and collaboration, audio video conferencing and file sharing. Part 2 covered password management, electronic signatures, accounting, business phone number and project management. Onward to part 3.

Customer Relationship Management (CRM)

As soon as you start working with customers, you're going to need a way to keep track of conversations, deals and todo items. A Customer Relationship Management or CRM system will help you do that. CRM is another one of those areas where there are so many choices it can easily be overwhelming. I've narrowed it down to 2 choices here to consider, plus one to avoid.

Pipedrive

Pipedrive.png

I first started using Pipedrive many years ago when it was a pretty young product. It's a CRM that was initially targeted towards small businesses, and has since expanded to be applicable to larger companies as well. It offers an easy way to keep track of deals by visualizing your sales process like a pipeline, and you can also keep track of all your customers, interactions, action items etc… I think they paved the way for a simple and easy to use CRM for small businesses - they're basically the opposite of Salesforce. Pricing starts at $12.50/user/mo.

HubSpot

Hubspot CRM.png

HubSpot offers a CRM that is pretty comparable to Pipedrive. The UI is remarkably similar, some would say heavily "inspired" by Pipedrive :) The key advantage with HubSpot CRM is that all the core features are free, which is really nice. If you want more advanced features like email templates, automatic email sequences and better reporting, you're in for some serious sticker shock though, as you'll need to jump to their Professional plan, which is $400/mo when paid annually, or $500/mo when paid month to month. This includes 5 users. If you want these features but you've only got 2 users, you're out of luck, and will be stuck paying for at least 5 users.

Salesforce

SalesForce.png

Salesforce is heavily used in enterprise sales teams across the world.

It's big, it's complex, it's expensive, and it has a ton of features. Useful for big companies, but not the kind of thing that you need for your startup. I’d avoid it.

My Recommendation: Pipedrive, unless you just need the basics

Piepdrive 512w.jpg

If you're ok with a basic set of features for your CRM and want it for free, HubSpot is a pretty compelling offering. However, if your requirements start to get a bit more complex and you find yourself drifting into their paid plans, you'll probably find better value elsewhere.

My preference is Pipedrive, starting at $12.50/user/mo. If you find yourself needing more features, there is the Advanced version at $25/user/mo, Professional at $50/user/mo (half the cost of Hubspot on a per user basis) and Enterprise at $100/user/mo (the same as Hubspot per user). There is also no requirement to buy at least 5 users as with Hubspot, so its pricing structure is a lot more flexible.

Note, CRM is one of those things where if you start with one product, you're less likely to change to another product later on because it can be difficult to move all of your historical data over, so it's important to factor that into your decision. Fortunately, there are a lot of services out there that can help you migrate between CRM systems, for example if you start with HubSpot and want to migrate to Pipedrive later, you can do that quite easily.

Customer Support

While CRM is more geared towards the sales process, once you've got customers, you need to be able to accept and manage inbound support tickets and set up a knowledge base of help articles. This is where a customer support system comes in. Since this is a tangential area to CRM, some companies offer both features (at an additional cost, of course).

HubSpot

Hubspot Service.png

Recently, HubSpot introduced a support ticket system, HubSpot Service. I've played around with it a bit and the interface is very similar to the HubSpot CRM tool. There is a free version available as well. If you use HubSpot CRM, the fact that they both work from the same underlying contact database is handy. What starts as free scales up quickly though: once you move beyond the free features, it's $50/user/mo for additional users as part of the Starter plan, then $400/mo for the Professional plan, which includes 5 users.

Zendesk

Zendesk.png

Zendesk is a full-fledged customer support system with features for ticket management, knowledge base, live chat and more. I primarily use its ticket management features, which I like - it's much easier than trying to manage customer support conversations just via email, especially as soon as you start getting multiple people across the team involved in responding to customer inquiries. The basic plan starts at $5/user/mo with an annual commitment (or $9/user/mo if you pay monthly). Like Hubspot, the more features you want, the more you pay.

Zendesk has recently introduced Zendesk Sell software to compete with HubSpot in the sales space - both companies are trying to be a one stop shop for all of your sales and support needs.

My Recommendation: Zendesk

Zendesk 512w.png

When it comes to support, I like Zendesk's feature set and pricing. If you're in the HubSpot camp for CRM, it would be worth checking out whether their customer support features fit your needs, although in general HubSpot seems to get expensive quickly once you move out of the free tiers into their paid plans.

Email Marketing

You're going to want to keep in touch with customers, investors and other interested parties by building an email list and communicating with them periodically. You could just email them directly using your email client (eg Outlook or Gmail) and that's fine for certain scenarios, like investor updates, but it's not appropriate for things like monthly newsletters to customers. For that, you're going to need an email marketing system which will allow you to send mass marketing emails, automatically add opt-out links (which you legally need to have in there) and provide analytics on who's engaging with your emails by opening them and clicking on links.

Mailchimp

Mailchimp.png

Mailchimp is one of the most well-known tools for email marketing, widely used by small businesses. It's pretty simple to use, easy to create your email templates (or customize ones that already exist) and free for up to 2,000 contacts.

Constant Contact

Constant Contact.png

Constant Contact is a competitor to Mailchimp. Similar in concept and features, but their pricing plans are not as attractive. There is no free tier (but they do have a free trial). You'll be looking at $20/mo to get started.

Squarespace Email

Squarespace Email Campaigns.png

If you're using Squarespace for your website, they recently launched an integrated email marketing product, Squarespace Email Campaigns. This makes it easy to take content from your website, for example blog posts, images, products etc… and add them to email campaigns to send to your list. It's pretty basic compared to Mailchimp or Constant Contact, but the tight integration with your website is an advantage. Pricing starts at $5/mo to send up to 500 emails.

My Recommendation: Mailchimp

Mailchimp Logo 512w.jpg

Mailchimp is a good product, and given it's free for the first 2000 contacts, it should take you a decent way before you need to upgrade to a paid plan.

Website

You're going to need to set up a company website. At a minimum, a 1 pager that explains what you do and allows people to contact you. I've talked in depth about the choices I made in this area with my company, FastBar.

Squarespace

Squarespace.png

Squarespace is an online platform for building and hosting your website. It's very simple and easy to use, but with that simplicity comes fairly limited control over the inner workings of your site. Most of the time, this is a good thing. Pricing starts at $12/mo.

Wix

Wix.png

Wix is similar to Squarespace. I've played around with it myself and have friends who've built websites with it. Wix has more flexibility in their templates compared to Squarespace, but that flexibility is a double edged sword. I've seen a lot of sites built in Wix that end up looking horrible, as it's really easy to mess things up from a design perspective. Pricing for Wix starts at $13/mo.

WordPress

Wordpress.org.png



WordPress is an open source website platform that gives you almost infinite flexibility in creating your site - but with flexibility comes complexity. For the ultimate control, you can download and install WordPress on your own server (either hosted in a data center, or in the cloud). I wouldn't recommend this for most people as it just adds several more things to your to-do list, for example backups, updating the platform, updating the plugins etc… You can also leverage a hosted provider, which will take away some of the work for you, but not all.

Wordpress.com.png

Finally, Wordpress.com offers a fully hosted version of WordPress, similar to Squarespace or Wix. With the fully hosted version, you'll have less flexibility, but also less complexity. Pricing for Wordpress.com starts at $5/mo, but you're probably going to need the $8/mo or $25/mo plan.

My Recommendation: Squarespace

Squarespace 512w.jpg

Squarespace is a simple solution, decently priced and easy to use for non-technical people. While you can do more complicated things with it, for the most part you're trading flexibility for convenience. Generally speaking, that's ok.


Build Your Own Database

In the course of operating your business, you're going to need to keep track of lots of other random pieces of data. It could be a list of investors and their contributions, your cap table, your company's key metrics, internal assets (eg computers, iPads etc…), inventory or a million other things. As your requirements get more complex and move beyond simple lists, you'll probably find yourself needing to create your own "mini database" to keep track of this stuff.


Excel / Sheets

Excel Google Sheets Lockup.jpg

The old trusty Excel (or Google Sheets) is your friend here, and the de facto standard for storing all kinds of pieces of data that you need to keep track of. Spreadsheets are clearly the go-to when you need to do any kind of number crunching. Their basic functionality is simple and easy to use, and there is a ton of power under the covers if you need it.

However, when you start to trend more towards "database" type scenarios, spreadsheets can get complicated very quickly.

Airtable

Airtable.png

Another great option if you need something that goes beyond a basic spreadsheet and more towards a database is Airtable. Airtable is basically an online spreadsheet / database hybrid. It's simple to use, and has some key advantages over Excel or Google Sheets when your needs trend more towards a database and less towards the number crunching abilities of a traditional spreadsheet.

For example, one of the things we use it for at my company, FastBar, is to keep track of inventory that we're taking or shipping to events. We've got one table with all of our inventory and equipment in it, then we create another table for each event and link across to the items in our main inventory table. That way we can easily look up items, assign them to an event and track what we send/take to an event, so we can make sure it all comes back to us. It would be possible to do this with Excel, but it's much harder and quite a bit clunkier.

My Recommendation: Airtable

Airtable 512w.png

For scenarios that lean more towards creating your own simple database, check out Airtable. Of course if your scenario is more about number crunching, then you probably have no need to look beyond the trusty spreadsheet.

Originally this was going to be a 3 part series, but it got a bit out of control, so I've added a 4th part. Stay tuned.

Entrepreneur's Toolkit: Tools and Technology for your Startup - Part 2

In part 1 of this series, we covered software for productivity, chat and collaboration, audio video conferencing and file sharing. Without further ado, let's continue.

Password Management

Keeping your company's data secure across the various SaaS tools and technologies you use is critical. At the heart of that is good password management. Some basic things you should be doing to protect yourself and your company:

  1. Ensuring that employees always use strong, unique passwords for each site

  2. Storing all passwords, whether they be for individual accounts (bound to a specific person), or shared accounts (the same account is used by multiple people in the company) in a password manager

  3. Using 2 Factor Authentication (2FA) wherever possible 

A password manager helps with the first 2 of those.

LastPass

LastPass Teams.png

Like all password managers, LastPass allows you to remember one password (ie your "last" password) and store everything else inside LastPass's secure password vault. This means you can have really long, complicated and unique passwords for each of your sites, and LastPass does the remembering so you don’t have to. It has browser plugins along with a mobile app that all sync together, so you can easily and securely access all of your passwords from wherever you are. LastPass's Team version allows individuals to keep track of their passwords, as well as setup company-wide shared folders to securely share passwords across the team. It also has a variety of different security policies you can set up, for example, ensuring that everyone must use 2FA to get into their LastPass vault. Pricing for LastPass teams starts at $4/user/mo.

 

1Password

1Password Business.png

1Password is another password manager similar to LastPass. Also $4/user/mo for their Teams product.


Dashlane

Dashlane Business.png

Dashlane is another option. Similar feature set and price point as LastPass and 1Password.

 

My Recommendation: LastPass

LastPass Logo 512w.png

All of these options are pretty similar. Out of familiarity, I'm going to recommend LastPass. But all of these are capable products and will give you the core features you need for password management across your team.

Whatever you do, you must provide a password manager for your team and insist that everyone in the company use it to store their company-related passwords.

 

Electronic Signature Service

As part of actually forming your company, you're going to need to sign a bunch of documents. And that's just the beginning. To manage all of this, you're going to want an electronic signature (sometimes called eSignature or e-sign) service.

DocuSign

Docusign.png

DocuSign provides online electronic signature services. Pretty straightforward, decent UI. Pricing starts at $10/mo for a single user, $25/user/mo for a multi-user plan.


Adobe Sign / EchoSign

Adobe Sign.png

Way back in 2011, Adobe acquired EchoSign, and has since rebranded it as Adobe Sign. Similar feature set to DocuSign, and priced at $10/mo for a single user, or $30/user/mo for multi-user plans.

 

HelloSign

Hellosign.png

Recently, Dropbox acquired HelloSign. Presumably it will eventually be rolled into the main Dropbox product, but for now it remains a standalone service. HelloSign's pricing is a bit more attractive than DocuSign or Adobe Sign. There is a limited version for free, a Pro version for $13/mo and a Business version for $40/mo for up to 5 users - quite a bit cheaper than an equivalent plan from Adobe or DocuSign.

 

My Recommendation: DocuSign

Docusign Logo 521w.png

I've used both DocuSign and Adobe Sign. Between the two of them, I prefer DocuSign, but fundamentally, they're both going to give you the same end result: electronic signatures. HelloSign has a compelling price point, so if I were starting something from scratch today, I'd check it out as well.


Accounting

It's one of the last things that you want to deal with as an entrepreneur trying to conquer the world, but it's necessary.


QuickBooks Desktop

Quickbooks Desktop.png

Do not use QuickBooks Desktop - it is the devil!

But seriously, when I first started my own company, QuickBooks was the de facto standard for small business. Every accountant is familiar with it; however, for ordinary people, I found it extremely unintuitive and one of the ugliest pieces of software I have ever used. I haven't touched it in years, and I assume (hope?) they have improved the UI since. Regardless, a desktop product means you've got your accounting file stored locally, which makes it difficult to share with your accountant and bookkeeper. You also need to make sure it's getting backed up (which you should be doing anyway).

Steer clear of this product and go for a hosted solution.


Xero

Xero.png

Xero is an online QuickBooks alternative. It's simpler, fully online, and makes it easy to invite other team members, bookkeepers and accountants to collaborate on your books. It is, however, rather limited in its feature set. If your business is simple, you'll probably be fine, but I've found problems as soon as I start getting into more complicated areas of accounting (or more accurately, when my bookkeeper or accountant tells me that Xero doesn't support something that I would like to do, like certain kinds of reporting). Pricing starts at $9/mo, but realistically you'll be looking at the $30/mo or maybe $60/mo version. Unlike the software we've explored thus far, this is not priced per user; it's just one charge for the company.

 

QuickBooks Online

Quickbooks Online.png

Long after I had adopted Xero, QuickBooks came out with QuickBooks Online, an alternative to their desktop product. Based on my prior experiences with their desktop product, I stayed well clear. However, recently, I was trying to do some reporting on expenses and my bookkeeper informed me that it was not possible with Xero, but could be done with QuickBooks Online. I tried it out, and was pleasantly surprised - QuickBooks Online has a good UI, it's easy to use, and it did have the reporting ability I wanted. Pricing starts at $20/mo, but you'll probably be looking at the $40 or $70/mo version.

 

My Recommendation: QuickBooks Online

Quickbook Online Logo 512w.png

At present, I use Xero for my companies, but I think if I were starting something new today I'd probably go with QuickBooks Online (I literally never thought I'd say that). This is primarily due to the number of limitations I've run into with Xero. Something that seems simple, and should be doable, is often not. Xero has evolved over the years, but it seems to move at a snail's pace compared to other online services I use. At this point, I’d give QuickBooks Online a shot.

 

Virtual Phone Number

As a startup, you probably don't need a full-fledged PBX phone system. However, you do need a business phone number, even if it's just for the myriad of accounts you need to establish with various government entities like your state's department of revenue and the city in which you operate. You may also want to publish one on your website for sales and support purposes.

 

Google Voice

Google Voice.png

If you're just looking for another inbound phone number that's not your own personal number, Google Voice has a free personal offering that works great. For example, you create a Google Voice number, and when people call the number, the call gets forwarded to your cell phone.

 

Grasshopper

Grasshopper.png

If you're looking for a professional phone presence, Grasshopper might be your ticket. When someone calls your company's number, they hear something like "Welcome to Awesome Co. Press 1 for sales, 2 for support etc…". Then you can set up a variety of different extensions that forward to people's cell phones according to various rules, along with a whole bunch of other features. It's pretty capable, but kinda pricey, starting at $29/mo.

 

My recommendation: Google Voice Free then Grasshopper

Logo Lockup - Google Voice Grasshopper.png

If you just need to publish a business number and don't intend to use it much, start with the personal version of Google Voice, which is free. When you get to the point that you need to publish a phone number on your website and handle inbound calls for sales, support etc… Grasshopper makes sense.

As a side note, if you do need a full-fledged internal phone system, there is also Google Voice for business that gives you a cloud-based phone system which is tightly integrated with G Suite. Pricing starts at $10/user/mo.

 

General Project Management

Remember that never-ending list of to-do items I mentioned earlier? You're going to need somewhere to keep track of all that stuff and collaborate with the rest of your team. There are more pieces of project management software than you can shake a stick at, but below are the 2 that I like best.

Note - this is not software project management, which I'll cover below.

Asana

Asana.png

Asana is a modern, easy to use, online project management system that can start simple and scale up to handle more complex scenarios. It's accessible via their website or mobile apps to help keep everyone in sync. The free version allows for up to 15 people, so it will take you a long way before you need to upgrade to the paid version, which starts at $10/user/mo.

 

Trello

Trello.png

Trello is easy to use, flexible project management software with a cool UI. It's conceptually similar to Asana, but the two take quite different approaches. Free initially, then starting at $10/user/mo.

 

My Recommendation: Asana

Asana Logo 512w.png

Personally I like Asana for most general project management tasks, but Trello is also a worthy choice. Use what you like best.

 

Software Project Management

Managing software development is different than general project management, thus it deserves its own tool. Similar to general project management, there are so many choices in this area that it's easy to get stuck in analysis paralysis. Here are a few of the most popular products out there to help you narrow down your search.

 

Pivotal Tracker

Pivotal Tracker.png

I've been using Pivotal Tracker, an agile project management tool, for many years now, and I like it. It's straightforward to use, constantly evolving, and designed to support an agile development workflow. There is a free version for up to 3 people and 2 projects, then you're into the paid versions starting at $12.50/mo (for 5 users) or the $30/mo plan (10 users). Pricing scales up from there for bigger teams.

 

Jira

Jira.png

I don’t use Jira myself, but it's a popular issue tracking tool with a lot of features. $10/mo for a team of up to 10 people, then pricing jumps up from there to $77/mo for 11 people.

 

GitHub Issues

GitHub.png

The king of source control, GitHub, has a built-in issue tracking system as well. The fact that it's built right into GitHub makes this a compelling choice if you're using GitHub for source control. Even though I use GitHub for source control, I don't think its project management features are as capable as Pivotal Tracker or Jira.

 

My Recommendation: Pivotal Tracker

Pivotal Tracker Logo 512w.png

Personally, I like Pivotal Tracker. I've been using it for nearly 10 years across a variety of different projects. It's simple to use and quite prescriptive, leading you down an agile development path, which I think is a good thing.

 

Stay tuned for part 3 for more recommendations on tools and technology you'll need for your startup.

Entrepreneur's Toolkit: Tools and Technology for your Startup - Part 1

When first embarking on your startup journey, there is a seemingly never ending list of things to do, and everything on that list is demanding your time and attention.

In fact, before you even formally start your company by creating a C-Corp in Delaware (or whatever type of entity you're forming, and wherever you're locating it), there are a lot of things you should be doing, for example:

  • Refining your vision

  • Validating your vision and idea with potential customers

  • Building prototypes and starting work on your minimal viable product (MVP)

  • Selling your product to customers - if you've got customers who are willing to pre-pay for your product, knowing that it won't be available for X months, that’s a great sign that you're solving a real pain point

  • Building your team - or at least talking to people who you may want to be part of your team as things progress

During this process, you're going to need to start using various tools and technologies to collaborate with people, manage the development of your MVP and communicate with customers.

Once you formally start your company, you now need to deal with things like accounting, tax filing, payroll and a whole bunch more.

To help manage all of this, you're going to need tools and technology. There are a myriad of choices of SaaS based tools and technology out there. It's time consuming and overwhelming to try and sift through them all and figure out which ones you want to try, then which ones to adopt.

I've spent countless hours over the years researching, trying out, and then actively using different pieces of software across various startups. To help give you a head start, here's my list of tools that you may want to check out.

Since there is a lot to cover here, I've broken this post up into 4 parts:

  • Part 1 (this part): productivity, chat and collaboration, audio video conferencing and file sharing

  • Part 2: password management, electronic signatures, accounting, business phone number and project management

  • Part 3: customer relationship management and support, email marketing, website and building your own database


  • Part 4: HR and payroll, payment processing on your website and in person, paying vendors in the US and internationally

Productivity Suite (email, word processing etc…)

Selecting a productivity suite to give you customized email (eg @yourcompany.com), word processing, spreadsheets, presentations etc… is going to be one of the first things you need to do. It will also drive the other choices you're going to make. Fortunately, there are really only 2 options you need to worry about.

Microsoft Office 365

Office 365.png

Microsoft's offering in the productivity space is called Office 365. Depending on which package you choose, it includes hosted company-branded email via Exchange along with desktop and online versions of Outlook for email, Word, Excel, PowerPoint etc… On their website, you'll find 3 different plans for personal, 3 for small business and at least 4 for enterprise. It's confusing, but here's the summary: for a startup, you'll be looking at the "Business" section:

Office 365 Pricing.png

Plans start at $5/user/mo, but you'll probably want the $12.50/user/mo Business Premium version - this gets you cloud hosted company email, all the Office apps (online and desktop versions) along with a bunch of other stuff we'll discuss below.

G Suite

G Suite.png

G Suite is Google's competitor to Microsoft Office 365. It's a very capable offering and many companies use it. However, all of Google's apps are browser based. I prefer the richness of a desktop app along with its offline capabilities (I spend a lot of time on a plane). Google's Docs, Sheets, Slides offerings are good, but they're not as good as Microsoft Word, Excel and PowerPoint respectively. Google's pricing packages are a lot simpler:

G Suite Pricing.png

Starts at $6/user/mo for the Basic version, but you'll probably want the $12/user/mo Business version. 

My Recommendation: Office 365

Office 365 Logo.png

I prefer the desktop versions of Word, Excel and PowerPoint to their online equivalents or Google's offerings, so that puts me squarely in the Office 365 camp. But to be honest, both of these options are good products. Choose whatever you're most comfortable with.

Chat and Collaboration

In many ways, your decision regarding a productivity suite will influence this decision quite heavily.

Slack

Slack.png

Slack is a tool for team collaboration and instant messenger-style chat between team members. Slack rose to popularity in recent years, sparking competition from Microsoft and Google (see below). Slack would claim you can replace email with it. Personally, I don't really buy that - no matter what you do internally, you'll need to communicate externally with customers and partners using whatever they use, which is email. I also think email is a better tool for longer form communication; instant messaging is useful for short messages and quick responses. There is a free version, but if you need more features you'll be starting at $6.67/user/mo.

Microsoft Teams

Teams.png

Teams is Microsoft's Slack competitor. It has deep ties into the Office 365 ecosystem and is already included for free in various Office 365 plans.

Google Hangouts Chat

Hangouts Chat.png

Hangouts Chat is Google's Slack competitor. As you would expect, it's got deep ties into the G Suite ecosystem and is already included in G Suite plans.

Skype

Skype.png

If you're looking to save money, Skype can also be used for instant messenger-style chat. It's more geared towards personal communication, so not optimized for business chat. It's ok for very small teams, and it is totally free, so there's that.

My Recommendation: Free version of Slack then Teams/Hangouts

Logo Lockup - Slack Teams Hangouts Chat.png

Slack is a very popular choice, and the free version is great for small teams. However, when you start to exceed the limits of the free tier and need to move to the paid tier, it's tough to justify $6.67/user/mo if you're already paying ~$12/user/mo for a productivity suite that includes an equivalent product - you'd be increasing your per user cost by more than 50% per month. Slack is arguably ahead of its competitors at the time of this writing, but Microsoft and Google are pouring probably hundreds of millions of dollars into catching up. Is Slack worth the extra money? Probably not.

After running into the limits of the free version of Slack, I'd move to whatever your productivity suite includes, namely Teams or Hangouts.

Audio and Video Conferencing - Internally Within Your Team

You'll need tools to do audio and video conferencing within your team.

Slack

Slack has audio and video conferencing built in, although I've found the features and quality don't match up with the offerings below. I'm sure it will improve over time.

Teams

Teams has this capability built in. In the early days, it was impossible to invite people outside your organization, for example, contractors without a @yourcompany.com email address, to join a meeting. That was rather problematic. They've fixed that now, and Teams is a good choice for meetings within the team (meaning employees and contractors). It's also included in various versions of Office 365.

Google Hangouts Meet

Hangouts Meet is G Suite's audio and video conferencing offering. It's included in your G Suite subscription.

Skype

If you're penny pinching, and working with a small team, you can use Skype totally for free. You will need to have each team member add the other team members into their contact lists to easily initiate calls, and everyone needs a Skype account.

My Recommendation: Teams or Hangouts

Logo Lockup - Teams Hangouts Meet.png

Go with whatever your productivity suite offers here.

Audio and Video Conferencing - Externally with Customers and Partners

Besides internal communication, you're going to need to communicate with external customers and partners. While you can technically use any of the above "internal" tools for this, they're not (yet) a great solution to the problem.

Zoom

Zoom.png

Zoom has risen to be one of the dominant video conferencing platforms in recent years. It's super simple and easy to use, and importantly, doesn't require your participants to jump through a lot of hoops to get into a meeting. Participants will need to install an app to access the meeting, but that’s free and easy to do for their PC, Mac, iOS or Android device. There is a free version limited to 40 minute meetings, so you'll probably want a paid version starting at $15/mo. You don’t want to be cut off at 40 minutes during an important customer presentation :/

My Recommendation: Zoom

Zoom Logo 512w.png

For external meetings, you can't beat the simplicity of Zoom. It's just a better user experience compared to Hangouts Meet, Teams or Skype. Hangouts Meet is pretty good, followed by Teams then Skype for this purpose. See if you like the experience included with your productivity suite, but if not, go with Zoom. 

File Sharing

This is another area heavily influenced by your choice of productivity suite.

Dropbox

Dropbox.png

Dropbox was one of the first companies to simplify file sharing and make it really easy for individuals. They started with a personal product so you could access your files from any device, and they've since expanded their offering to include features for business, both large and small. Microsoft and Google, along with a ton of other companies, also compete with them. For a team version of Dropbox, you'll be looking at $12.50/user/mo with a minimum of 3 people, so at least $37.50/mo.

OneDrive for Business

OneDrive for Business.png

OneDrive for Business is Microsoft's offering. Its biggest selling point is that it's included for free with various Office 365 plans. However, it's actually based on SharePoint, and while SharePoint is a good product for enterprises (it has lots of enterprisey features), it's clunky. Microsoft has tried to put a nicer interface on top of SharePoint to make it look more like the consumer version of OneDrive (which is actually really good), but they are actually totally different products.

Google Drive

Google Drive.png

G Suite's file storage solution is called Drive. It's similar in concept to OneDrive for Business, and included with G Suite.

OneDrive Personal

OneDrive Personal.png

If you're penny pinching, you can use Microsoft's OneDrive Personal product and set up shared folders to share files with your team. OneDrive Personal is simple and easy to use, but you'll be missing out on a lot of management features you'll want as your team grows.

My Recommendation: Dropbox if you want to spend the money, OneDrive for Business / Google Drive otherwise

Logo Lockup - Dropbox OneDrive for Business Google Drive.png

If you are willing to spend the money, I think Dropbox has the best product in this space. But at $12.50/user/mo with a minimum of 3 users, it's pretty spendy - you're basically doubling the cost of your core productivity suite.

Is it really worth the extra? As with Slack for chat and collaboration, you'll need to decide whether it's enough better to justify the extra spend. If you're happy with whatever is included with your productivity suite, ie OneDrive for Business or Google Drive, go for that instead.

 

Stay tuned for parts 2 and 3 for more recommendations on tools and technology you'll need for your startup.

How to Delete Old Data from an Azure Storage Table: Part 2

In part 1 of this series, we explored how to delete data from an Azure Storage Table and covered the simple implementation of the AzureTablePurger tool.

The simple implementation seemed way too slow to me, and took a long time when purging a large amount of data from a table. I figured if I could execute many requests to Table Storage in parallel, it should perform a lot quicker.

To recap, our code fundamentally does 2 things:

  1. Enumerates entities to be deleted

  2. Deletes entities using batch operations on Azure Table Storage

Since there could be a lot of data to enumerate and delete, there is no reason why we can't execute these operations in parallel.

Since the AzureTablePurger is implemented as a console app, we could parallelize the operation by spinning up multiple threads, multiple Tasks, or using the Task Parallel Library (TPL), to name a few ways. The latter is the preferred way to build such a system in .NET 4 and beyond, as it hides a lot of the underlying complexity of building multi-threaded and parallel applications.

I settled on using the Producer/Consumer pattern backed by the Task Parallel Library.

Producer/Consumer Pattern

Put simply, the Producer/Consumer pattern is where you have 1 thing producing work and putting it in a queue, and another thing taking work items from that queue and actually doing the work.

There is a pretty straightforward way to implement this using the BlockingCollection in .NET.
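Here's a minimal sketch of the pattern using BlockingCollection - illustrative only, not the tool's actual code:

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

public static class ProducerConsumerSketch
{
    public static void Run()
    {
        var queue = new BlockingCollection<string>();

        var producer = Task.Run(() =>
        {
            for (var i = 0; i < 10; i++)
            {
                queue.Add($"work-item-{i}"); // produce work items
            }

            queue.CompleteAdding(); // signal that no more work is coming
        });

        var consumer = Task.Run(() =>
        {
            // GetConsumingEnumerable() blocks waiting for items, and completes
            // once CompleteAdding() has been called and the queue is drained
            foreach (var item in queue.GetConsumingEnumerable())
            {
                Console.WriteLine($"Processing {item}");
            }
        });

        Task.WaitAll(producer, consumer);
    }
}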

Producer

In our case, the Producer is the thing that is querying Azure Table Storage and queuing up work:

/// <summary>
/// Data collection step (ie producer).
///
/// Collects data to process, caches it locally on disk and adds to the _partitionKeyQueue collection for the consumer
/// </summary>
private void CollectDataToProcess(int purgeRecordsOlderThanDays)
{
    var query = PartitionKeyHandler.GetTableQuery(purgeRecordsOlderThanDays);
    var continuationToken = new TableContinuationToken();

    try
    {
        // Collect data
        do
        {
            var page = TableReference.ExecuteQuerySegmented(query, continuationToken);
            var firstResultTimestamp = PartitionKeyHandler.ConvertPartitionKeyToDateTime(page.Results.First().PartitionKey);

            WriteStartingToProcessPage(page, firstResultTimestamp);

            PartitionPageAndQueueForProcessing(page.Results);

            continuationToken = page.ContinuationToken;
            // TODO: temp for testing
            // continuationToken = null;
        } while (continuationToken != null);

    }
    finally
    {
        _partitionKeyQueue.CompleteAdding();
    }
}

We start by executing a query against Azure Table Storage. As with our simple implementation, this obtains a page of results containing a maximum of 1,000 entities. We then take the page of data and break it up into chunks for processing.

Remember, to delete entities in Table Storage, we need the PartitionKey and RowKey. Since we could be dealing with a large amount of data, it doesn't make sense to store all of this PartitionKey and RowKey information in memory, otherwise we risk running out of it. Instead, I decided to cache it on disk in one file for each PartitionKey. Inside the file, we write one line for each PartitionKey + RowKey combination that we want to delete.
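As a rough sketch, the per-partition caching helpers might look something like this - the stream writer method name matches the one used in the code below, but the bodies here are my assumption, not necessarily the tool's exact implementation:

using System.IO;

// Sketch of the per-partition temp file helpers - names and bodies assumed
private string GetPartitionTempFilePath(string partitionKey)
{
    return Path.Combine(Path.GetTempPath(), $"{partitionKey}.txt");
}

private StreamWriter GetStreamWriterForPartitionTempFile(string partitionKey)
{
    // Open in append mode so a partition that spans multiple pages of results
    // keeps writing into the same file
    return new StreamWriter(GetPartitionTempFilePath(partitionKey), append: true);
}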

We also add a work item to an in-memory queue that our consumer will pull from. Here’s the code:

/// <summary>
/// Partitions up a page of data, caches locally to a temp file and adds into 2 in-memory data structures:
///
/// _partitionKeyQueue which is the queue that the consumer pulls from
///
/// _partitionKeysAlreadyQueued keeps track of what we have already queued. We need this to handle situations where
/// data in a single partition spans multiple pages. After we process a given partition as part of a page of results,
/// we write the entity keys to a temp file and queue that PartitionKey for processing. It's possible that the next page
/// of data we get will have more data for the previous partition, so we open the file we just created and write more data
/// into it. At this point we don't want to re-queue that same PartitionKey for processing, since it's already queued up.
/// </summary>
private void PartitionPageAndQueueForProcessing(List<DynamicTableEntity> pageResults)
{
    _cancellationTokenSource.Token.ThrowIfCancellationRequested();

    var partitionsFromPage = GetPartitionsFromPage(pageResults);

    foreach (var partition in partitionsFromPage)
    {
        var partitionKey = partition.First().PartitionKey;

        using (var streamWriter = GetStreamWriterForPartitionTempFile(partitionKey))
        {
            foreach (var entity in partition)
            {
                // One line per entity: "PartitionKey,RowKey" - this matches the
                // comma-separated parsing in the consumer below
                var lineToWrite = $"{entity.PartitionKey},{entity.RowKey}";

                streamWriter.WriteLine(lineToWrite);
                Interlocked.Increment(ref _globalEntityCounter);
            }
        }

        if (!_partitionKeysAlreadyQueued.Contains(partitionKey))
        {
            _partitionKeysAlreadyQueued.Add(partitionKey);
            _partitionKeyQueue.Add(partitionKey);

            Interlocked.Increment(ref _globalPartitionCounter);
            WriteProgressItemQueued();
        }
    }
}

A couple of other interesting things here:

  • The queue we're using is actually a BlockingCollection, which is an in-memory data structure that implements the Producer/Consumer pattern. When we put things in here, the Consumer side, which we’ll explore below, takes them out

  • It's possible (and in fact likely) that we'll run into situations where data having the same PartitionKey ends up spanning multiple pages of results. For example, at the end of page 5, we have 30 records with PartitionKey = 0636213283200000000, and at the beginning of page 6, we have a further 12 records with the same PartitionKey. To handle this situation, when processing page 5, we create a file 0636213283200000000.txt to cache all of the PartitionKey + RowKey combinations, and add 0636213283200000000 to our queue for processing. When we process page 6, we realize we already have a cache file 0636213283200000000.txt, so we simply append to it. Since we have already added 0636213283200000000 to our queue for processing, we don't want to add it again - duplicate items in the queue would mean multiple consumers each trying to process the same cache file, which doesn't make sense. Since there isn't a built-in way to prevent us from adding duplicates into a BlockingCollection, the easiest option is to simply maintain a parallel data structure (in this case a HashSet) so we can keep track of and quickly query items we've added into the queue, ensuring we're not going to add the same PartitionKey into the queue twice (see the field sketch after this list)

  • We're using Interlocked.Increment() to increment some global variables. This ensures that when we're running multi-threaded, each thread can reliably increment the counter without running into any threading issues. If you're wondering why to use Interlocked.Increment() vs lock or volatile, there is a great discussion on the matter on this Stack Overflow post.
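For reference, here's roughly what the supporting fields look like - declarations assumed based on how they're used above:

using System.Collections.Concurrent;
using System.Collections.Generic;

// The queue the producer fills and the consumers drain
private readonly BlockingCollection<string> _partitionKeyQueue = new BlockingCollection<string>();

// Tracks which PartitionKeys are already queued. Note that HashSet<T> is not
// thread-safe by itself, so concurrent access from producer and consumers
// needs external locking (or a ConcurrentDictionary-backed set)
private readonly HashSet<string> _partitionKeysAlreadyQueued = new HashSet<string>();

// Incremented via Interlocked.Increment() from multiple threads
private int _globalEntityCounter;
private int _globalPartitionCounter;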

Consumer

The consumer in this implementation is responsible for actually deleting the entities we want to delete. It is given the PartitionKey that it should be dealing with, reads the cached PartitionKey + RowKey combination from disk, and constructs batches of no more than 100 operations at a time and executes them against the Table Storage API:

/// <summary>
/// Process a specific partition.
///
/// Reads all entity keys from temp file on disk
/// </summary>
private void ProcessPartition(string partitionKeyForPartition)
{
    try
    {
        var allDataFromFile = GetDataFromPartitionTempFile(partitionKeyForPartition);

        var batchOperation = new TableBatchOperation();

        for (var index = 0; index < allDataFromFile.Length; index++)
        {
            var line = allDataFromFile[index];
            var indexOfComma = line.IndexOf(',');
            var indexOfRowKeyStart = indexOfComma + 1;
            var partitionKeyForEntity = line.Substring(0, indexOfComma);
            var rowKeyForEntity = line.Substring(indexOfRowKeyStart, line.Length - indexOfRowKeyStart);

            var entity = new DynamicTableEntity(partitionKeyForEntity, rowKeyForEntity) { ETag = "*" };

            batchOperation.Delete(entity);

            // Table Storage batches are limited to 100 operations, so flush
            // the batch once it's full and start a new one
            if ((index + 1) % 100 == 0)
            {
                TableReference.ExecuteBatch(batchOperation);
                batchOperation = new TableBatchOperation();
            }
        }

        if (batchOperation.Count > 0)
        {
            TableReference.ExecuteBatch(batchOperation);
        }

        DeletePartitionTempFile(partitionKeyForPartition);
        _partitionKeysAlreadyQueued.Remove(partitionKeyForPartition);

        WriteProgressItemProcessed();
    }
    catch (Exception)
    {
        ConsoleHelper.WriteWithColor($"Error processing partition {partitionKeyForPartition}", ConsoleColor.Red);
        _cancellationTokenSource.Cancel();
        throw;
    }
}

After we finish processing a partition, we remove the temp file and we remove the PartitionKey from our data structure that is keeping track of the items in the queue, namely the HashSet _partitionKeysAlreadyQueued. There is no need to remove the PartitionKey from the queue, as that already happened when this method was handed the PartitionKey.

Note:

  • This implementation reads all of the cached data into memory at once, which is fine for the data patterns I was dealing with, but we could improve it by doing a streamed read of the data to avoid pulling too much information into memory at once (sketched below)
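
For example, a streamed read could look something like this (GetPartitionTempFilePath() is a hypothetical helper for illustration):

// File.ReadLines() enumerates the file lazily, so we only hold one line
// in memory at a time instead of the entire cache file
foreach (var line in File.ReadLines(GetPartitionTempFilePath(partitionKey)))
{
    // parse the "PartitionKey,RowKey" line and add it to the current batch as before
}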


Executing both in Parallel

To string both of these together and execute them in parallel, here's what we do:

public override void PurgeEntities(out int numEntitiesProcessed, out int numPartitionsProcessed)
{
    void CollectData()
    {
        // Producer step (described above): page through the query results,
        // cache PartitionKey + RowKey pairs to disk and enqueue PartitionKeys,
        // calling _partitionKeyQueue.CompleteAdding() once enumeration is done
    }

    void ProcessData()
    {
        // GetConsumingPartitioner() is a custom extension (from ParallelExtensionsExtras),
        // not part of the BCL; it lets Parallel.ForEach consume the BlockingCollection efficiently
        Parallel.ForEach(_partitionKeyQueue.GetConsumingPartitioner(), new ParallelOptions { MaxDegreeOfParallelism = MaxParallelOperations }, ProcessPartition);
    }

    _cancellationTokenSource = new CancellationTokenSource();

    Parallel.Invoke(new ParallelOptions { CancellationToken = _cancellationTokenSource.Token }, CollectData, ProcessData);

    numPartitionsProcessed = _globalPartitionCounter;
    numEntitiesProcessed = _globalEntityCounter;
}

Here we're defining 2 Local Functions, CollectData() and ProcessData(). CollectData will serially execute our Producer step and put work items in a queue. ProcessData will, in parallel, consume this data from the queue.

We then invoke both of these steps in parallel using Parallel.Invoke(), which waits for them to complete. They'll be deemed complete when the Producer has declared there is nothing more it plans to add via:

_partitionKeyQueue.CompleteAdding();

and the Consumer has finished draining the queue.

Performance

Now that we're doing this in parallel, it should be quicker, right?

In theory, yes, but initially, this was not the case:

From the 2 runs above you can visually see how the behavior differs from the Simple implementation: each "." represents a queued work item, and each "o" represents a consumed work item. We're producing the work items much quicker than we can consume them (which is expected), and once our production of work items is complete, then we are strictly consuming them (also expected).

What was not expected was the timing. It was taking approximately the same time on average to delete an entity using this parallel implementation as it did with the simple implementation, so what gives?

Turns out I had forgotten an important detail - adjusting the number of network connections available by using the ServicePointManager.DefaultConnectionLimit property. By default, a console app is only allowed a maximum of 2 concurrent connections to a given host. In this app, the producer is tying up a connection while it hits the API to obtain the PartitionKeys + RowKeys of entities we want to delete. There should be many consumers running at once sending deletion requests to the API, but there is only 1 connection available for them to share. Once all of the production is complete, that connection is freed up and we have 2 connections available for consumers. Either way, we're seriously limiting our throughput and defeating the purpose of our much fancier and more complicated parallel solution, which should be quicker, but instead was running at around the same pace as the Simple version.

I made this change to rectify the problem:

private const int ConnectionLimit = 32;

public ParallelTablePurger()
{
    ServicePointManager.DefaultConnectionLimit = ConnectionLimit;
}

Now, we're allowing up to 32 connections to the host. With this I saw the average execution time drop to around 5-10ms, so somewhere between 2-4x the speed of the simple implementation.

I also tried 128 and 256 maximum connections, but that actually had a negative effect. My machine was a lot less responsive while the app was running, and average deletion times per entity were in the 10-12ms range. 32 seemed to be somewhere around the sweet spot.

For the future: turning it up to 11

There are many things that can be improved here, but for now, this is good enough for what I need.

However, if we really wanted to turn this up to 11 and make it a lot quicker, a lot more efficient and something that just ran constantly as part of our infrastructure to purge old data from Azure tables, we could re-build this to run in the cloud.

For example, we could create 2 separate functions: 1 to enumerate the work, and a second one that will scale out to execute the delete commands:

1. Create an Azure Function to enumerate the work

As we query the Table Storage API to enumerate the work to be done, put the resulting PartitionKey + RowKey combination either into a text file on blob storage (similar to this current implementation), or consider putting it into Cosmos DB or Redis. Redis would probably be the quickest, and Blob Storage probably the cheapest, and has the advantage that we avoid having to spin up additional resources.

Note: if we were to deploy this Azure Function to a consumption plan, functions have a timeout of 10 minutes. We'd need to make sure that we either limit our run to less than 10 minutes in duration, or that we can handle being interrupted part way through and pick up where we left off the next time the function runtime calls our function.

2. Create an Azure Function to do the deletions

After picking up an item from the queue, this function would be responsible for reading the cached PartitionKey + RowKey and batching up delete operations to send to Table Storage. If we deployed this as an Azure Function on a consumption plan, it will scale out to a maximum of 200 instances, which should be plenty for us to work through the queue of delete requests in a timely manner, and so long as we're not exceeding 20,000 requests per second to the storage account, we won't hit any throttling limits there.
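As a rough idea of the shape this could take, here's a hypothetical skeleton of such a function - the function name, queue name and message format are assumptions, not actual code from the tool:

// using Microsoft.Azure.WebJobs;
// using Microsoft.Extensions.Logging;

[FunctionName("PurgePartition")]
public static void Run([QueueTrigger("partitions-to-purge")] string partitionKey, ILogger log)
{
    // 1. Read the cached PartitionKey + RowKey pairs for this partition from Blob Storage
    // 2. Build TableBatchOperations of no more than 100 deletes each
    // 3. Execute the batches against Table Storage
    log.LogInformation($"Purging partition {partitionKey}");
}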

Async I/O

Something else we'd look to do is to make the I/O operations asynchronous. This would probably provide some benefits in our console app implementation, although I'm not sure how much difference it would really make. I decided against async I/O for the initial implementation, since it doesn't play well with Parallel.ForEach (check out this Stack Overflow post or this GitHub issue for some more background), and would require a more complex implementation to work.

With a move to the cloud and Azure Functions, I'm not sure how much benefit async I/O would have either, since we rely heavily on scaling out the work to different processes and machines via Azure Functions. Async is great for increasing throughput and scalability, but not raw performance, which is our goal here.

Side Note: Durable Functions

Something else which is pretty cool and could be useful here is Azure Durable Functions. This could be an interesting solution to the problem as well, especially if we wanted to report back on aggregate statistics like how long it took to complete all of the work and the average time to delete a specific record (i.e. the fan-in).

Conclusion

It turns out the Parallel solution is quicker than the Simple solution. Feel free to use this tool for your own purposes, or if you’re interested in improving it, go ahead and fork it and make a pull request. There is a lot more that can be done :)

How to Delete Old Data From an Azure Storage Table: Part 1

Have you ever tried to purge old data from an Azure table? For example, let's say you're using the table for logging, like the Azure Diagnostics Trace Listener which logs to the WADLogsTable, and you want to purge old log entries.

There is a suggestion on Azure feedback for this feature, questions on Stack Overflow (here's one and another), questions on MSDN Forums and bugs on GitHub. There are numerous good articles that cover deletion of entities from Azure Tables. However, no tools are available to help.


UPDATE: I’m creating a SaaS tool that will automatically delete data from your Azure tables older than X days so you don’t need to worry about building this yourself. Interested? Sign up for the beta (limited spots available).


TL;DR: while the building blocks exist in the Azure Table storage APIs to delete entities, there are no native ways to easily bulk delete data, and no easy way to purge data older than a certain number of days. Nuking the entire table is certainly the easiest way to go, so designing your system to roll to a new table every X days or month or whatever (for example Logs201907, Logs201908 for logs generated in July 2019 and August 2019 respectively) would be my recommendation.

If that ship has sailed and you're stuck in a situation where you want to purge old data from a table, like I was, I created a tool to make my life, and hopefully your life as well, a bit easier.

 The code is available here: https://github.com/brentonw/AzureTablePurger

Disclaimer: this is prototype code, I've run this successfully myself and put it through a bunch of functional tests, but at present, it doesn’t have a set of unit tests. Use at your own risk :)

How it Works

Fundamentally, this tool has 2 steps:

  1. Enumerate entities to be deleted

  2. Delete entities using batch operations on Azure Table Storage

I started off building a simple version which synchronously grabs a page of query results from the API, breaks that page into batches of no more than 100 items grouped by PartitionKey (a requirement of the Azure Table Storage batch operation API), then executes batch delete operations against Azure Table Storage.

Enumerating Which Data to Delete

Depending on your PartitionKey and RowKey structure, this might be relatively easy and efficient, or it might be painful. With Azure Table Storage, you can only query efficiently when using PartitionKey or a PartitionKey + RowKey combination. Anything else results in a full table scan which is inefficient and time consuming. There is tons of background on this in the Azure docs: Azure Storage Table Design Guide: Designing Scalable and Performant Tables.

Querying on the Timestamp column is not efficient, and will require a full table scan.

If we take a look at the WADLogsTable, we'll see data similar to this:


PartitionKey = 0636213283200000000
RowKey = e046cc84a5d04f3b96532ebfef4ef918___Shindigg.Azure.BackgroundWorker___Shindigg.Azure.BackgroundWorker_IN_0___0000000001652031489


Here's how the format breaks down:

PartitionKey = "0" + the number of ticks (since 12:00:00 midnight, January 1, 0001) at the start of the minute in which the entry was logged

RowKey = the deployment ID + the name of the role + the name of the role instance + a unique identifier
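
To make the format concrete, here's a minimal sketch of converting between a DateTime and this PartitionKey format (the helper names are illustrative; the tool's PartitionKeyHandler does something similar):

private static string ToPartitionKey(DateTime utc)
{
    // Round down to the start of the minute, then format the tick count
    // as 19 digits - which is where the leading "0" comes from
    long minuteTicks = utc.Ticks - (utc.Ticks % TimeSpan.TicksPerMinute);
    return minuteTicks.ToString("D19");
}

private static DateTime FromPartitionKey(string partitionKey)
{
    return new DateTime(long.Parse(partitionKey), DateTimeKind.Utc);
}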


Every log entry within a particular minute is bucketed into the same partition, which has 2 key advantages:

  1. We can now effectively query on time

  2. Each partition shouldn't have too many records in there, since there shouldn't be that many log entries within a single minute

Here's the query we construct to enumerate the data:

public TableQuery GetTableQuery(int purgeEntitiesOlderThanDays)
{
    var maximumPartitionKeyToDelete = GetMaximumPartitionKeyToDelete(purgeEntitiesOlderThanDays);

    var query = new TableQuery()
        .Where(TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.LessThanOrEqual, maximumPartitionKeyToDelete))
        .Select(new[] { "PartitionKey", "RowKey" });

    return query;
}

private string GetMaximumPartitionKeyToDelete(int purgeRecordsOlderThanDays)
{
    return DateTime.UtcNow.AddDays(-1 * purgeRecordsOlderThanDays).Ticks.ToString("D19");
}

The execution of the query is pretty straightforward:

var page = TableReference.ExecuteQuerySegmented(query, continuationToken);

The Azure Table Storage API limits us to 1000 records at a time. If there are more than 1000 results, the continuationToken will be set to a non-null value, indicating we need to issue the query again with that particular continuationToken to get the next page of data.

Deleting the Data

In order to make this as efficient as possible and minimize the number of delete calls we make to the Azure Table Storage API, we want to batch things up as much as possible.

Azure Table Storage supports batch operations, but there are two caveats we need to be aware of: (1) all entities must be in the same partition, and (2) there can be no more than 100 items in a batch.

To achieve this, we're going to break our page of data into chunks of no more than 100 entities, grouped by PartitionKey:

protected IList<IList<DynamicTableEntity>> GetPartitionsFromPage(IList<DynamicTableEntity> page)
{
    var result = new List<IList<DynamicTableEntity>>();

    var groupByResult = page.GroupBy(x => x.PartitionKey);

    foreach (var partition in groupByResult.ToList())
    {
        var partitionAsList = partition.ToList();
        if (partitionAsList.Count > MaxBatchSize)
        {
            var chunkedPartitions = Chunk(partition, MaxBatchSize);
            foreach (var smallerPartition in chunkedPartitions)
            {
                result.Add(smallerPartition.ToList());
            }
        }
        else
        {
            result.Add(partitionAsList);
        }
    }

    return result;
}
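
The Chunk() helper used above isn't shown in this post; one possible LINQ-based implementation (my assumption, not necessarily the tool's actual code) would be:

private static IEnumerable<IEnumerable<T>> Chunk<T>(IEnumerable<T> source, int chunkSize)
{
    // Bucket items by index / chunkSize: items 0-99 go to chunk 0, 100-199 to chunk 1, etc.
    return source
        .Select((item, index) => new { item, index })
        .GroupBy(x => x.index / chunkSize, x => x.item);
}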

Then, we'll iterate over each partition, construct a batch operation and execute it against the table:

foreach (var entity in partition)
{
    Trace.WriteLine(
        $"Adding entity to batch: PartitionKey={entity.PartitionKey}, RowKey={entity.RowKey}");
    entityCounter++;

    batchOperation.Delete(entity);
    batchCounter++;
}

Trace.WriteLine($"Added {batchCounter} items into batch");

partitionCounter++;

Trace.WriteLine($"Executing batch delete of {batchOperation.Count} entities");
TableReference.ExecuteBatch(batchOperation);

Here's the entire processing logic:

public override void PurgeEntities(out int numEntitiesProcessed, out int numPartitionsProcessed)
{
    VerifyIsInitialized();

    var query = PartitionKeyHandler.GetTableQuery(PurgeEntitiesOlderThanDays);

    var continuationToken = new TableContinuationToken();

    int partitionCounter = 0;
    int entityCounter = 0;

    // Collect and process data
    do
    {
        var page = TableReference.ExecuteQuerySegmented(query, continuationToken);
        var firstResultTimestamp = PartitionKeyHandler.ConvertPartitionKeyToDateTime(page.Results.First().PartitionKey);

        WriteStartingToProcessPage(page, firstResultTimestamp);

        var partitions = GetPartitionsFromPage(page.ToList());

        ConsoleHelper.WriteLineWithColor($"Broke into {partitions.Count} partitions", ConsoleColor.Gray);

        foreach (var partition in partitions)
        {
            Trace.WriteLine($"Processing partition {partition.First().PartitionKey}");

            var batchOperation = new TableBatchOperation();
            int batchCounter = 0;

            foreach (var entity in partition)
            {
                Trace.WriteLine(
                    $"Adding entity to batch: PartitionKey={entity.PartitionKey}, RowKey={entity.RowKey}");
                entityCounter++;

                batchOperation.Delete(entity);
                batchCounter++;
            }

            Trace.WriteLine($"Added {batchCounter} items into batch");

            partitionCounter++;

            Trace.WriteLine($"Executing batch delete of {batchOperation.Count} entities");
            TableReference.ExecuteBatch(batchOperation);

            WriteProgressItemProcessed();
        }

        continuationToken = page.ContinuationToken;

    } while (continuationToken != null);

    numPartitionsProcessed = partitionCounter;
    numEntitiesProcessed = entityCounter;
}

Performance

I ran this a few times on a subset of data. On average, depending on the execution run, it took around 10-20ms to delete each entity.

This seemed kind of slow to me, and when trying to purge a lot of data, it took many hours to run. I figured I should be able to improve the speed dramatically by turning this into a parallel operation.

Stay tuned for part 2 of this series to dive deeper into the parallel implementation.

Conclusion for Part 1

Since Azure Table Storage has no native way to purge old data, your best bet is to structure your data so that you can simply delete old tables when you no longer need them, and purge data that way.

However, if you can’t do that, feel free to make use of this AzureTablePurger tool.

Reliability and Resiliency for Cloud Connected Applications

Building cloud connected applications that are reliable is hard. At the heart of building such a system is a solid architecture and focus on resiliency. We're going to explore what that means in this post.

When I first started development on FastBar, a cashless payment system for events, there were a few key criteria that drove my decisions for the overall architecture of the system.

Fundamentally, FastBar is a payment system designed specifically for event environments. Instead of using cash or drink tickets or clunky old credit card machines, events use FastBar instead.

There are 2 key characteristics of the environment in which FastBar operates that drive almost all underlying aspects of the technical architecture: internet connectivity sucks, and we're dealing with people's money.

Internet connectivity at an event sucks

Prior to starting FastBar, I had a side business throwing events in Seattle. We'd throw summer parties, Halloween parties, New Year's Eve parties etc… In 10 years of throwing events, I cannot recall a single event where internet worked flawlessly. Most of the time it ranged from "entirely useless" to "ok some of the time".

At an event, there are typically 2 choices for getting internet connectivity:

  1. Rely on the venue's in-house WiFi

  2. Use the cellular network, for example a hotspot

Sometimes the venue's WiFi would work great in an initial walkthrough… and then 2000 people would arrive and connectivity would go to hell. Other times it would work great in certain areas of the venue, then we'd test it where we wanted to set up registration, or place a bar, only to get an all too familiar response from the venue's IT folks: "oh, we didn’t think anyone would want internet there".

Relying on hotspots was just as bad: at many indoor locations, connectivity is poor. Even if you're outdoors with great connectivity, add a couple of thousand people to that space, each of them with a smartphone hungry for bandwidth so they can post to Facebook/Instagram/Snapchat, or whose phone decides now is a great time to download the latest 3GB iOS update in the background.

No matter what, internet connectivity at event environments is fundamentally poor and unreliable. This is something that isn't true in a standard retail environment like a coffee shop or hairdresser where you'd deploy a traditional point of sale, and it would have generally reliable internet connectivity.

We're dealing with people's money

For the event, the point of sale system is one of the most critical aspects - it affects attendees' ability to buy stuff, and the event's ability to sell stuff. If the point of sale is down, attendees are pissed off and the event is losing money. Nobody wants that.

Food, beverage and merchandise sales are a huge source of revenue for events. For some events, it could be their only source of revenue.

In general, money is a very sensitive topic for people. Attendees have an expectation that they are charged accurately for the things they purchase, and events expect the sales numbers they see on their dashboard are correct, both of which are very reasonable expectations.

Reliability and Resiliency

As with any complicated, distributed software system, there are many non-functional requirements that are important to create something that works. A system needs to be:

  • Available

  • Secure

  • Maintainable

  • Performant

  • Scalable

  • And of course reliable

Ultimately, our customers (the events), and their customers (the attendees), want a system that is reliable and "just works". We achieve that kind of reliability by focusing on resiliency - we expect things will fail, and design a system that will handle those failures.

This means when thinking about our client side mobile apps, we expect the following:

  • Requests we make over the internet to our servers will fail, or will be slow. This could mean we have no internet connectivity at the time and can't even attempt to make a request to the server, or we have internet but the request failed to get to the server, or the request made it to our server but the client didn't get the response

  • A device may run out of battery in the middle of an operation

  • A user may exit the app in the middle of an operation

  • A user may force close the app in the middle of an operation

  • The local SQLite database could get corrupt

  • Our server environment may be inaccessible

  • 3rd party services our apps communicate with might be inaccessible

On the server side, we run on Azure and also depend on a handful of 3rd party services. While generally reliable, we can expect:

  • Problems connecting to our Web App or API

  • Unexpected CPU spikes on our Azure Web Apps that impact client connectivity and dramatically increase response time for requests

  • Web Apps having problems connecting to our underlying SQL Azure database or storage accounts

  • Requests to our storage account resources being throttled

  • 3rd party services that normally respond in a couple of hundred milliseconds taking 120+ seconds to respond (that one caused a whole bunch of issues that still traumatize me to this day)

We've encountered every single one of these scenarios. Sometimes it seems like almost everything that can fail, has failed at some point, usually at the most inopportune time. That's not quite true: I can mentally construct some nightmare scenarios that we could potentially encounter in the future, but these days we're in great shape to withstand multiple critical failures across different parts of our system and still retain the ability to take orders at the event, with minimal impact to attendees and event staff.

We've done this by focusing on resiliency in all parts of the system - everything from the way we architect to the details of how we make network requests and interact with 3rd party services.

Processing an Order

To illustrate how we achieve resiliency, and therefore reliability, let's take a look at an example of processing an order. Conceptually, it looks like this:

FastBar - Order Processing - Conceptual.png

The order gets created on the POS, which makes an API request to send it to the server. Seems pretty easy, right?

Not quite.

Below is a highly summarized version of what actually happens when an order is placed, and how it flows through the system:

There is a lot more to it than just a simple request-response. Instead, it's a complicated series of asynchronous operations with a whole bunch of queues in between, which help us provide a system that is reliable and resilient.

On the POS App

  1. The underlying Order object and associated set of OrderItems are created and persisted to our local SQLite database
  2. We create a work item and place it on a queue. In our case, we implement our own queue as a table inside the same SQLite database. Steps 1 and 2 happen transactionally, so either all inserts succeed, or none do (a minimal sketch of this follows the list below). All of this happens within milliseconds, as it's all local on the device and doesn't rely on any network connectivity. The user experience is never impacted by network connectivity issues
  3. We call the synchronization engine and ask it to push our changes
    1. If we're online at the time, the synchronization engine will pick up items from the queue that are available for processing. That could be just the 1 order we just created, or there could be many orders queued up and waiting to be sent to the server, for example if we were offline and have just come back online. Each item is processed in the order it was placed on the queue, and each item involves its own set of work. In this case, we're attempting to push this order to our server via our server-side API. If the request to the server succeeds, we'll delete the work item from the queue and update the local Order and OrderItems with some data the server returns to us in the response. This all happens transactionally.
    2. If there is a failure at any point, for example a network error, or a server error, we'll put that item back on the queue for future processing
    3. If we're not online, the synchronization engine can't do anything, so it returns immediately, and will re-try in the future. This happens either via a timer that is syncing periodically, or after another order is created and a push is requested
    4. Whenever we make any request to the server that could update or create any data, we send an IdempotentOperationKey, which the server uses to determine if the request has already been processed or not
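
Here's a minimal sketch of that transactional persist-and-enqueue from steps 1 and 2, assuming a sqlite-net style API (SyncQueueItem is a hypothetical work item table for illustration):

_connection.RunInTransaction(() =>
{
    _connection.Insert(order);               // step 1: persist the Order...
    _connection.InsertAll(orderItems);       // ...and its OrderItems
    _connection.Insert(new SyncQueueItem     // step 2: queue a work item in the same transaction
    {
        OrderId = order.Id,
        CreatedUtc = DateTime.UtcNow
    });
});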

The Server API

  1. Our Web API receives the request and processes it
    1. We make sure the user has permission to perform this operation, and verify that we have not already processed a request with the same IdempotentOperationKey the client has supplied (sketched after this list)
    2. The incoming request is validated, and if we can, we'll create an Order and set of OrderItems and insert them into the database. At this point, our goal is to do the minimal work possible and leave the bulk of the processing to later
    3. We'll queue a work item for processing in the background
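
As a rough sketch, the idempotency check in step 1.1 might look something like this (the entity, property and helper names are assumptions for illustration):

// Inside an async controller action; _db, Orders and MapToResponse are illustrative names
var existing = await _db.Orders
    .FirstOrDefaultAsync(o => o.IdempotentOperationKey == request.IdempotentOperationKey);

if (existing != null)
{
    // Already processed - return the original result rather than creating a duplicate Order
    return Ok(MapToResponse(existing));
}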

Order Processor WebJob

  1. Our Order Processor is implemented as an Azure WebJob and runs in the background, constantly looking at the queue for new work
  2. The Order Processor is responsible for the core logic when it comes to processing an order, for example, connecting the order to an attendee and their tab, applying any discounts or promotions that may be applicable for that attendee and re-calculating the attendee's tab
  3. Next, we want to notify the attendee of their purchase, typically by sending them an SMS. We queue up a work item to be handled by the Outbound SMS Processor

Outbound SMS Processor WebJob

  1. The Outbound SMS processor handles the composition and sending of SMS messages to our 3rd party service for delivery, in our case, Twilio
  2. We're done!

That's a lot of complexity for what seems like a simple thing. So why would we add all of these different components and queues? Basically, it’s necessary to have a reliable and resilient system that can handle a whole lot of failure and still keep going:

  • If our client has any kind of network issues connecting to the server

  • If our client app is killed in any way, for example, if the device runs out of battery, or if the OS decided to kill our app since we were moved to the background, or if the user force quits our app

  • If our server environment is totally unavailable

  • If our server environment is available but slow to respond, for example, due to cloud weirdness (a whole other topic), or our own inefficient database queries or any number of other reasons

  • If our server environment has transitory errors caused by problems connecting with dependent services, for example, Azure SQL or Azure storage queues returning connectivity errors

  • If our server environment has consistent errors, for example, if we pushed a new build to the server that had a bug in it

  • If 3rd party services we depend on are unavailable for any reason

  • If 3rd party services we depend on are running slow for any reason

Asynchronicity to the Max

You'll notice the above flow is highly asynchronous. Wherever we can queue something up and process it later, we will. This means we're never worried about whether whatever system we're talking to is operating normally or not. If it's alive and kicking, great, that work will be processed quickly. If not, no worries, it will be processed in the background at some point in the future. Under normal circumstances, you could expect an order to be created on the client and a text message received by the customer within seconds. But it could take a lot longer if any part of the system is running slowly or is down, and that's ok, since it doesn't dramatically affect the user experience, and the reliability of the system is rock solid.

It's also worth noting that all of these operations are both logically asynchronous, and asynchronous at the code level wherever possible.

Logically asynchronous means that instead of the POS order-creation UI directly calling the server, or, on the server side, the request thread directly calling a 3rd party service to send an SMS, these operations get stored in a queue for later processing in the background. Being logically asynchronous is what gives us reliability and resiliency.

Asynchronous at the code level is different. This means that wherever possible, when we're doing any kind of I/O, we utilize C#’s async programming features. It's important to note that the underlying code being asynchronous doesn’t actually have anything to do with resiliency. Rather, it helps our components achieve higher throughput, since they're not tying up resources like threads, network sockets, database connections, file handles etc… waiting for responses. Asynchrony at the code level is all about throughput and scalability.

Conclusion

When you're building mobile applications connected to the cloud, reliability is key. The way to achieve reliability is by focusing on resiliency. Expect that everything can and probably will fail, and design your system to handle those failures. Make sure your system is highly logically asynchronous, and queue up work to be handled by background components wherever possible.

FastBar's Technical Architecture

Previously, I've discussed the start of FastBar, how the client and server technology stacks evolved and what it looks like today (read more in part 1, part 2 and part 3 of that series).

As a recap, here's what the high-level components of FastBar look like:

FastBar Components - High Level.png

Let's dive deeper.

Architecture

So, what does FastBar’s architecture look like under the hood? Glad you asked:

Client apps:

  • Registration: used to register attendees at an event, essentially connecting their credit card to a wristband and all of the associated features that are required at a live event

  • Point of Sale: used to sell stuff at events

These are both mobile apps built in Xamarin and running on iOS devices.

The server is more complicated from an architectural standpoint, and is divided into the following primary components:

  • getfastbar.com - the primary customer facing website. Built on Squarespace, this provides primarily marketing content for anyone wanting to learn about FastBar

  • app.getfastbar.com - our main web app which provides 4 key functions:

    • User section - as a user, there are a few functions you can perform on FastBar, such as creating an account, adding a credit card, updating your user profile information and, if you've got the permissions, creating events. This section is pretty basic

    • Attendee section - as an attendee, you can do things like pre-register for an event, view your tab, change your credit card, email yourself a receipt and adjust your tip. This is the section of the site that receives the most traffic

    • Event Control Center section - this is by far the largest section of the web app, it's where events can be fully managed: configuring details, connecting payment accounts, configuring taxes, setting up pre-registration, managing products and menus, viewing reports and downloading data and a whole lot more. This is where event organizers and FastBar staff spend the majority of their time

    • Admin section - various admin related features used by FastBar support staff. The bulk of management related to a specific event is done from the Event Control Center, acting on behalf of an event organizer

  • api.getfastbar.com - our API, primarily used by our own internal apps. We also open up various endpoints to some partners. We don’t make this broadly accessible publicly yet because it doesn't need to be. However, it’s something we may decide to open up more broadly in the future

The main web app and API share the same underlying core business logic, and are backed by a variety of other components, including:

  • WebJobs:

    • Bulk Message Processor - whenever we're sending a bulk message, like an email or SMS that is intended to go to many attendees, the Bulk Message Processor will be responsible for enumerating and queuing up the work. For example, if we were to send out a bulk SMS to 10,000 attendees of the event, whatever initiates this process (the web app or the API) will queue up a work item for the Bulk Message Processor that essentially says "I want to send a message to a whole bunch of people". The Bulk Message Processor will pick up the message and start enumerating 10,000 individual work items that it will queue up for processing by the Outbound SMS Processor, a downstream component. The Outbound SMS Processor will in turn pick up each work item and send out individual SMSs

    • Order Processor - whenever we ingest orders from the POS client via the API, we do the minimal amount of work possible so that we can respond quickly to the client. Essentially, we're doing some initial validation and persisting the order in the database, then queuing a work item so that the Order Processor can take care of the heavy lifting later, and requests from the client are not unnecessarily delayed. This component is very active during an event

    • Outbound Email Processor - responsible for sending an individual email, for example as the result of another component that queued up some work for it. We use Mailgun to send emails

    • Outbound Notification Processor - responsible for sending outbound push notifications. Under the covers this uses Azure Notification Hubs

    • Outbound SMS Processor - responsible for sending individual SMS messages, for example a text message update to an attendee after they place an order. We send SMSs via Twilio

    • Sample Data Processor - when we need to create a sample event for demo or testing purposes, we send this work to the Sample Data Processor. This is essentially a job that an admin user may initiate from the web app, and since it could take a while, the web app will queue up a work item, then the Sample Data Processor picks it up and goes to work creating a whole bunch of test data in the background

    • Tab Authorization Processor - whenever we need to authorize someone's credit card that is connected to their tab, the Tab Authorization Processor takes care of it. For example, if attendees are pre-registering themselves for an event weeks beforehand, we vault their credit card details securely, and only authorize their card via the Tab Authorization Processor 24 hours before the event starts

    • Tab Payment Processor - when it comes time to execute payments against a tab, the Tab Payment Processor is responsible for doing the work

    • Tab Payment Sweeper - before we can process a tab's payment, that work needs to be queued. For example, after an event, all tabs get marked for processing. The Tab Payment Sweeper runs periodically, looking for any tabs that are marked for processing, and queues up work for the Tab Payment Processor. It's similar in concept to the Bulk Message Processor in that it's responsible for queuing up work items for another component

    • Tab Authorization Sweeper - just like the Tab Payment Sweeper, the Tab Authorization Sweeper looks for tabs that need to be authorized and queues up work for the Tab Authorization Processor

  • Functions

    • Client Logs Dispatcher - our client devices are responsible for pushing up their own zipped-up, JSON formatted log files to Azure Blob Storage. The Client Logs Dispatcher then takes the logs and dispatches them to our logging system, which is Log Analytics, part of Azure Monitor

    • Server Logs Dispatcher - similar in concept to the Client Logs Dispatcher, the Server Logs Dispatcher is responsible for taking server-side logs, which initially get placed into Azure Table Storage, and pushing them to Log Analytics so we have both client and server logs in the same place. This allows us to do end to end queries and analysis

    • Data Exporter - whenever a user requests an export of data, we handle this via the Data Exporter. For large events, an export could take some time. We don’t want to tie up request threads or hammer the database, so we built the Data Exporter to take care of this in the background

    • Tab Recalculator - we maintain a tab for each attendee at an event, it's essentially the summary of all of their purchases they've made at the event. From time to time, changes happen that require us to recalculate some or all tabs for an event. For example, let's say the event organizer realized that beer was supposed to be $6, but was accidentally set for $7 and wanted to fix this going forward, and for all previous orders. This means we need to recalculate all tabs that have orders involving the affected products. For a large event there could be many thousands of tabs affected by this change, and since each tab has unique characteristics, including the rules around how totals should be calculated, this has to be done individually for each tab. The Tab Recalculator takes care of this work in the background

    • Tags Deduplicator - FastBar is a complicated distributed system that supports offline operation of client devices like the Registration and POS apps. On the server side, we also process things in parallel in the background. Long story short, these two characteristics mean that sometimes data can get out of sync. The Tags Deduplicator helps put things back in sync so we eventually arrive at a consistent state

Azure Functions vs WebJobs

So, how come some things are implemented as Functions and some as WebJobs? Quite simply, the WebJobs were built before Azure Functions existed and/or before Azure Functions really became a thing.

Nowadays, Azure Functions seem to be the preferred technology to use, so we made the decision a while ago to create any new background components using Functions, and if any significant refactoring is required to a WebJob, we'll take the opportunity to move it over to a Function as well.

Over time, we plan on phasing out WebJobs in favor of Functions.

Communication Between the Web App / API and Background Components

This is almost exclusively done via Azure Storage Queues. The only exception is the Client Logs Dispatcher, which can also be triggered by a file showing up in Blob Storage.

Azure has a number of queuing solutions that could be used here. Storage queues is a simple solution that does what we need, so we use it.
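
For example, queuing a work item with the classic Azure Storage SDK only takes a few lines (the queue name and payload here are illustrative):

// using Microsoft.WindowsAzure.Storage;
// using Microsoft.WindowsAzure.Storage.Queue;

var account = CloudStorageAccount.Parse(connectionString);
var queue = account.CreateCloudQueueClient().GetQueueReference("order-processor");
queue.CreateIfNotExists();
queue.AddMessage(new CloudQueueMessage(orderId.ToString()));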

Communication with 3rd Party Services

Wherever we can, we’ll push interaction with 3rd party services to background components. This way, if 3rd party services are running slowly or down completely for a period of time, we minimize impact on our system.

Blob and Table Storage

We utilize Blob storage in a couple of different ways:

  • Client apps upload their logs directly to blob storage for processing by the Client Logs Dispatcher

  • Client apps have a feature that allows the user to create a bug report and attach local state. The bug is logged directly into our work item tracking system, Pivotal Tracker. We also package up the client-side state and upload it to blob storage. This allows developers to re-create the state of a client device on their own device, or in the simulator, for debugging purposes

Table storage is used for the initial step in our server-side logging. We log to Table storage, and then push that data up to Log Analytics via the Server Logs Dispatcher.

Azure SQL

Even though there are a lot of different technologies to store data these days, we use Azure SQL for a few key reasons: it’s familiar, it works, and it’s a good choice for a financial system like FastBar, where data is relational and we require ACID semantics.

Conclusion

That’s a brief overview of FastBar’s technical architecture. In future posts, I’ll go over more of the why behind the architectural choices and the key benefits that it has.

Choosing a Tech Stack for Your Startup - Part 3: Cloud Stacks, Evolving the System and Lessons Learnt

This is the final part in a 3 part series around choosing a tech stack for your startup:

  • In part 1, we explored the choices we made and the evolution of FastBar’s client apps

  • In part 2, we started the exploration of the server side, including our technology choices and philosophy when it came to building out our MVP

  • In part 3, this post, we’ll wrap-up our discussion of the server side choices and summarize our key lessons learnt.

As a recap, here are the areas of the system we’re focused on:

FastBar Components - Server Hilighted.png

And in part 2, we left off discussing self-hosting vs utilizing the cloud. TL;DR - forget about self hosting and leverage the cloud.

Next step - let’s pick a cloud provider…

AWS vs Azure

In 2014, AWS was the undisputed leader in the cloud, but Azure was quickly catching up in capabilities and feature set.

Through the accelerator we got into, 9 Mile Labs, we had access to some free credits with both AWS and Azure.

I decided to go with Azure, in part because they offered more free credits via their BizSpark Plus program than what AWS was offering us, in part because I was more familiar with their technology than that of AWS, in part because I'm generally a fan of Microsoft technology, and in part because I wanted to take advantage of their Platform as a Service (PaaS) offerings. Specifically, Azure App Service Web Apps and Azure SQL - AWS didn't have any direct equivalents at the time. I could certainly spin up VMs and install my own versions of IIS and SQL Server on AWS, but that was more work for me, and I had enough things to do.

PaaS vs IaaS

After doing some investigation into Azure's PaaS offerings, namely App Service Web Apps and Azure SQL, I decided to give them a go.

With PaaS offerings, you're trading some flexibility for convenience.

For example, with Web Apps, you don’t deploy your app to a machine or a specific VM - you deploy it to Azure's Web app service, and it deploys it to one or more VMs on your behalf. You don’t remote desktop into the VM to poke around - you use the web-based tools or APIs that Microsoft provides for you. Azure SQL doesn't support all of the features that regular SQL does, but it supports most of them. You don’t have the ability to configure where your database and log files will be placed, Azure manages that for you. In most cases, this is a good thing, as you've got better things to do.

With Web Apps, you can easily set up auto-scaling, like I described in part 2, and Azure will magically create or destroy VMs according to the rules you set up, and route traffic between them. With SQL Azure, you can do cool things like create read-only replicas and geo-redundant failover databases within minutes.

If there is a PaaS offering of a particular piece of infrastructure that you require on whatever cloud you're using, try it out. You'll be giving up some flexibility, but you'll get a whole lot in return. For most scenarios, it will be totally worth it.

3rd Party Technologies

Stripe

The first 3rd party technology service we integrated was Stripe - FastBar is a payment system after all, so we needed a way to vault credit cards and do payment processing. At the time Stripe was the gold standard in terms of developer friendly payment APIs, so we went with it and still use it to this day. We've had our fair share of issues with Stripe, but overall it's worked well for us.

Loggly: A Cautionary Tale

Another piece of 3rd party tech we used early on was Loggly. This is essentially "logging as a service": instead of having to figure out how to ingest, process and search large volumes of log data yourself, Loggly provides a cloud-based service that does it for you.

We used this for a couple of years and eventually moved off it because we found the performance was not reliable.

We ran into an incident one time where Loggly, which typically would respond to our requests in 200-300ms, was taking 90-120 seconds to respond (ouch!). Some of our server-side web and API code called Loggly directly as part of the request execution path (a big no-no, that was our bad) and needless to say, when your request thread is tied up waiting for a network call that is going to take 90-120 seconds, everything goes to hell.

During the incident, it was tough for us to figure out what was going on, since our logging was impacted. After the incident, we analyzed and eventually tracked down 90-120 second response times from Loggly as the cause. We made changes to our system so that we would never again call Loggly directly as part of a request's execution path, rather we'd log everything "locally" within the Azure environment and have a background process that would push it up to Loggly. This is really what we should have been doing from the beginning. At the same time, Loggly should have been more robust.

This made us immune to any future slowdowns on the Loggly side, but over time we still found that our background process was often having trouble sending data to Loggly. We had an auto-retry mechanism set up so we’d keep retrying until we succeeded. Eventually this would work, but we found the retry mechanism was being triggered way too often for our liking. We also found similar issues on our client apps, where we'd have them send logs directly to Loggly in the background, to avoid having to send to our server and then on to Loggly. This was more of an issue, since clients operate in constrained-bandwidth environments.

Overall, we experienced lots of flakiness with Loggly regardless of if we were communicating with it from the client or server.

In addition, the cheaper tiers of Loggly are quite limited in the amount of data you can send to them. For a large event, we'd quickly hit the data cap, and the rest of our logs would be dropped. This made the Loggly search features (which were awesome by the way, and one of the key things that attracted us to Loggly) pretty much useless for us, since we'd only have a fraction of our data available unless we moved up to a significantly more expensive tier.

We removed Loggly from the equation in favor of Azure's Log Analytics (now renamed to Azure Monitor). It's inside Azure with the rest of our stuff, has awesome query capabilities (on par with Loggly) and it’s much cheaper for us due to its usage-based pricing model that scales with the amount you use it, as opposed to a handful of main pricing buckets with Loggly.

Twilio

We use Twilio for sending SMS messages. Twilio has worked great for us from the early days, and we don’t have any plans to change it anytime soon.

Cloudinary

On a previous project, I got deep into the complexities of image processing: uploading, cropping, resizing and hosting, distributing to a CDN etc…

TL;DR it's something that seems really simple on the surface, but quickly spirals out of control - it’s a hard problem to solve properly. 

I learnt my lesson on a previous project, and on FastBar, I did not pass Go and did not collect $200, rather I went straight to Cloudinary. It's a great product, easy to use, and it removes all of our image processing and hosting hassles.

Mailgun

Turns out sending email is hard. That’s why companies like Mailgun and Sendgrid exist.

We decided to go with Mailgun since it had a better pricing model for our purposes compared to Sendgrid. But fundamentally, they’re both pretty similar. They help you take care of the complexities of sending reliable email so you don’t have to deal with it.

Building out the Event Control Center

As our client apps and their underlying APIs started to mature, we started turning our development focus to building out the Event Control Center on the server - the place where event organizers and FastBar staff could fully configure all aspects of the event, manage settings, configure products and menus, view reports etc…

This was essentially a traditional web app. We looked at using tech like React or Angular. As we spec'd out our screens, we realized that our requirements were pretty straightforward. We didn't have lots of pages that needed a rich UI, we didn’t need a Single Page App (SPA), and overall, our pages were pretty simple. We decided to go with a more "traditional" request/response ASP.NET web app, using HTML 5, JS, CSS, Bootstrap, jQuery etc…

The first features we deployed were around basic reporting, followed by the ability to create events, edit settings for the event, view and manage attendee tabs, manage refunds, create and configure products and menu items.

Nowadays, we've added multi user support, tax support, comprehensive reporting and export, direct and bulk SMS capabilities, configuration of promotions (ie discounts), device management, attendee surveys and much more.

The days of managing via SQL Management Studio are well in the past (thankfully!).

Re-building the public facing website

For a long time, the public facing section of the website was a simple 1-pager explanation of FastBar. It was long overdue for a refresh, so in 2018 I set out to rebuild it, improve the visuals, and most importantly, update the content to better reflect what we had to offer customers.

For this, we considered a variety of options, including: custom building an ASP.NET site, Wordpress, Wix, Squarespace etc...

Building a custom ASP.NET website was kind of a hassle. Our pages were simple, but it would be highly beneficial if non-developers could easily edit content, so we really needed a basic CMS. This meant we needed to go for either a self-hosted CMS, like Wordpress, or a hosted CMS, like Wordpress.com, Wix or Squarespace.

I had built and deployed enough basic Wordpress sites to know that I didn't want to spend our time, effort and money on self-hosting it. Self-hosting means having to deal with constant updates to the platform and the plugins (Wordpress is a ripe target for hackers, so keeping everything up to date is critical), managing backups and the like.

We were busy enough building features for the FastBar system; I didn’t want to allocate precious dev resources to the public facing website when a hosted solution at $12/mo (or thereabouts) would be sufficient.

For Wordpress in general, I found it tough to find a good quality template that matched the visuals I was looking for. To be clear, there are a ton of templates available - I'd say way too many. I found it really hard to hunt through the sea of mediocrity to find something I really liked.

When evaluating hosted offerings like Squarespace and Wix, my first concern was that, as a technology company, potential engineering hires might judge us for using something like that. I don’t know about you, but I'll often F12 or Ctrl-U a website to see what's going on under the hood :) Also, while quick to spin up, hosted offerings like Squarespace lacked what I consider basic features, like version control, so that was a big red flag.

Eventually I determined that the pros and simplicity of a hosted offering outweighed the cons and we went with Squarespace. Within about a week, we had the site re-designed and live - the vast majority of that time was spent on the marketing and messaging, the implementation part was really easy.

Where we're at today

Today, our backend is comprised of 3 main components: the Core Web App and API, our public facing website and 3rd party services that we depend on.

Our core Web App and API is built in ASP.NET and WebAPI and runs on Azure. We leverage Azure App Services, Azure SQL, Azure Storage service (Blob, Table and Queue), Azure Monitor (Application Insights and Log Analytics), Azure Functions, WebJobs, Redis and a few other bits and pieces.

The public facing website runs on Squarespace. 

The 3rd party services we utilize are Stripe, Cloudinary, Twilio and Mailgun.

Lessons Learnt

Looking back at our previous lessons learnt from client side development:

  1. Optimize for Productivity

  2. Choose Something Popular

  3. Choose the Simplest Thing That Works

  4. Favor Cross Platform Tech

The first 3 are highly applicable to server side development; the 4th is more client specific. You could make the argument that it’s valuable server-side as well - it depends on how many server environments you’re planning on deploying to. In most cases, you’re going to pick a stack and stick with it, so it’s less relevant.

Here are some additional lessons we learnt on the server side.

Ruthlessly Prioritize

This one applies to both client and server side development. As a startup, it's important to ruthlessly prioritize your development work and tackle the most important items first. How far can you go without building out an admin UI and instead relying on SQL scripts? What's the most important thing that your customers need right now? What is the #1 feature that will help you move the product and business forward?

Prioritization is hard, especially because it's not just about the needs of the customer. You also have to balance the underlying health of the code base, the design and the architecture of the system. You need to be aware of any technical debt you're creating, and you need to be careful not to paint yourself into a corner that you might not be able to get out of later. You need to think about the future, but not get hung up on it too much that you adopt unnecessary work now that never ends up being needed later. Prioritization requires tough tradeoffs. 

Prioritization is more art than science and I think it's something that continues to evolve with experience. Prioritize the most important things you need right now, and do your best to balance that with future needs.

Just go Cloud

Whether you're choosing AWS, Azure, Google or something else, just go for the cloud. It's pretty much a given these days, so hopefully you don't have the urge to go to the dark side and buy and host your own servers. 

Save yourself the hassle, the time and the money and utilize the cloud. Take advantage of the thousands upon thousands of developers working at Amazon, Microsoft and Google who are working to make your life easier and use the cloud.

Speaking of using the cloud…

Favor PaaS over IaaS

If there is a PaaS solution available that meets your needs, favor it over IaaS. Sure, you'll lose some control, but you'll gain so much in terms of ease of use and advanced capabilities that would be complicated and time consuming for you to build yourself.

It means less work for you, and more time available to dedicate to more important things, so favor PaaS over IaaS.

Favor Pre-Built Solutions

Better still, if there is an entire solution available to you that someone else hosts and manages, favor it.

Again, less work for you, and allows you to focus your time, energy and resources on more important problems that will provide value to your customers, so favor pre-built solutions.


Conclusion

In part 1 we discussed client side technology choices we went through when building FastBar, including our thinking around Android vs iOS, which client technology stack to use, how our various apps evolved, where we’re at today, and key lessons learnt.

In part 2 and part 3, this post, we discussed the server side, including choosing a server side stack, building out an MVP, deciding to self-host or utilize the cloud, AWS vs Azure, various other 3rd party technologies we adopted, where we’re at today and more lessons learnt, primarily related to server side development.

Hopefully you can leverage some of these lessons in building your own startup. Good luck, and go change the world!

Choosing a Tech Stack for Your Startup - Part 2: Server Side Choices and Building Your MVP

In part 1 of this series, we covered a detailed overview of how FastBar chose its client side technology stack, how it evolved over the years, where it is today, and key lessons we learnt along the way.

In this post, part 2, we'll start exploring the server side technology choices and conclude in part 3.

FastBar Components - Server Hilighted.png

Selecting a Stack

The very first prototype version of FastBar that was built at Startup Weekend didn't have much of a server side at all. I think we had a couple of basic API endpoints built in Ruby on Rails and deployed to Heroku.

After Startup Weekend when we became serious about moving FastBar forward, building out the server side became a priority and we needed to select a tech stack.

We discussed various options: Ruby on Rails, Go, Java, PHP, Node.js and ASP.NET. I decided to go with ASP.NET MVC and C# for a few reasons:

  1. Familiarity

  2. Suitability for the job

  3. How the platform was evolving

Familiarity

.NET and C# were the platform and language that I was most familiar with. I spent 7.5 years working at Microsoft, the first 2 of which were in the C# compiler team, the next 5.5 helping customers architect and build large-scale systems on Microsoft technology. Since leaving Microsoft, I spent a lot of time using .NET for a startup, along with some consulting work. For me, .NET technology was going to be the most productive option.

In tech, there is a ton of religion, and oftentimes (perhaps most of the time) people make decisions on technologies based on their particular flavor of religion. They believe that X is faster than Y, or A is better than B for [insert unfounded reason here]. The reality is there are many technology choices, all of which have pros and cons. So long as you're choosing a technology that is mainstream and well suited for the task at hand, you can probably be successful building your system using (almost) whatever technology stack you prefer.

Suitability for the job

It's important to select a tool that's suitable for the job you're trying to do. For us, our backend required a website and an API - pretty standard stuff. ASP.NET is a great platform for doing just that. Likewise, there are many other fine choices, including Ruby on Rails or Node.js.

How the platform was evolving

Back in 2014, Microsoft technologies were often shunned by developers who were not familiar with .NET, because they felt that Microsoft was the evil empire, all they produced was proprietary tech and closed source code, and nothing good could possibly come from the walled garden that was Redmond.

The reality was quite different. As early as 2006, Microsoft started a project called CodePlex as a way to share source code, and used it to publish technologies like the AJAX Control Toolkit. In October 2007, Microsoft published the source code for the entire .NET Framework. It wasn't an "open source" project per se, but rather "reference source" - it allowed developers to step into the code and see what was going on under the hood, addressing a primary complaint about proprietary software and closed source systems. Also in October 2007, Microsoft announced that the upcoming ASP.NET MVC project would be fully open source. In 2008, the rest of the ASP.NET technologies were also open sourced.

That trend continued, with Microsoft open sourcing more and more of its stack. Fast forward to April 2014, and Microsoft made a big announcement regarding open source: the creation of the .NET Foundation, and the open sourcing of large chunks of .NET. Later that year, they open sourced even more.

Fast forward again to today: Microsoft owns GitHub and has made most, if not all, of .NET open source. It's pretty clear that Microsoft is "all in" on open source. Here's an interesting article on the state of Microsoft and open source as of December 2018. And if you're interested in some more background, Scott Hunter and Beth Massi have some great Medium posts that chronicle Microsoft's journey into open source.

Back in 2014, I was a fan of .NET technology, I liked the direction it was moving in, and I felt that the trend towards open sourcing more of it would only strengthen the technology and the ecosystem. Looking back, this has proved correct.

Building the Basics

With our tech stack chosen, the first 2 things we needed to build were (a) a basic customer facing website and (b) an API with underlying business logic and data schema to support the POS. In startup land, this is often called an MVP, or Minimum Viable Product.

For the customer facing website, I built a 1-pager ASP.NET website using Bootstrap. It was simple, but looked decent enough and was mobile friendly. It really just needed to serve as a landing page with a brief explanation of FastBar and a "join our email list" form. That site actually lasted way longer than it should have :)

The more important thing we needed was an API that the client apps could talk to: first to push up order details, and next to display tabs to attendees so they could keep track of their spending. Although it would have been nice to have an administrative UI so we could view and manage attendees and orders, configure products and menus, view reports etc… it would have taken a lot of effort to build, and it wasn't the highest priority thing we needed to implement.
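To make that concrete, here's a minimal sketch of the kind of API endpoints we're talking about, using classic ASP.NET Web API. This is illustrative only, not FastBar's actual code - the Order type, OrdersController and route shapes are all hypothetical:

```csharp
// Illustrative sketch only - Order, OrdersController and the route shapes
// are hypothetical, not FastBar's actual code.
using System.Collections.Generic;
using System.Linq;
using System.Web.Http;

public class Order
{
    public int Id { get; set; }
    public string WristbandId { get; set; }
    public decimal Total { get; set; }
}

public class OrdersController : ApiController
{
    // In-memory store for the sketch; the real thing sits on top of a database.
    private static readonly List<Order> Orders = new List<Order>();

    // POST api/orders - the point of sale pushes order details up here.
    public IHttpActionResult Post(Order order)
    {
        order.Id = Orders.Count + 1;
        Orders.Add(order);
        return Ok(order);
    }

    // GET api/orders?wristbandId=abc123 - lets an attendee check their tab,
    // i.e. the running total of what they've spent at the event.
    public IHttpActionResult Get(string wristbandId)
    {
        var total = Orders.Where(o => o.WristbandId == wristbandId).Sum(o => o.Total);
        return Ok(new { WristbandId = wristbandId, Total = total });
    }
}
```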

Our First UI: SQL Management Studio and Excel

For a long time, the primary interface we used to set up and manage events was SQL Management Studio. I created an Excel spreadsheet that served as a helper tool to generate SQL statements, which would in turn be run in SQL Management Studio. This was definitely a rough and ready approach, and not my preferred path, but hey, as a startup with limited resources, you need to pick your battles.
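We built that helper with Excel formulas, but to give you a feel for the workflow, here's the same idea expressed as a small C# sketch. The table and column names are made up for illustration, not our actual schema:

```csharp
// Hypothetical sketch: turn rows of product data into SQL INSERT statements
// to paste into SQL Management Studio. We actually did this with Excel
// formulas; the Product table and its columns below are made up.
using System;
using System.Collections.Generic;

class GenerateInserts
{
    static void Main()
    {
        const int eventId = 42; // hypothetical event being configured

        var products = new List<(string Name, int PriceCents)>
        {
            ("Beer", 700),
            ("Wine", 900),
            ("Soda", 300),
        };

        foreach (var p in products)
        {
            // A real app should use parameterized SQL; plain string building
            // mirrors what a spreadsheet formula spits out.
            Console.WriteLine(
                $"INSERT INTO Product (EventId, Name, PriceCents) " +
                $"VALUES ({eventId}, '{p.Name.Replace("'", "''")}', {p.PriceCents});");
        }
    }
}
```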

Reporting was done via a somewhat complicated SQL query that would spit out tabular sales results, which I'd then copy/paste into a fancy Excel spreadsheet I'd created. The results of the copy/paste would drive a "dashboard" tab in the spreadsheet that summarized key metrics, as well as a series of other pages that would show fancy graphs of sales over time and product breakdowns.

This was all rather crude, but like I said, as a startup with limited resources, you need to pick your battles and focus on highest priority tasks first.

You see, our attendees didn’t care about how the system was configured or how reports were generated. They simply wanted to get their wristband, tap to pay for their drinks and get back to enjoying the event.

Our event organizers didn’t much care how the event was configured either, so long as the point of sale displayed the right products at the right prices, customers were charged correctly, money appeared in their bank account and they could get some kind of reporting. We took care of all of the configuration of the system on their behalf, and Excel-based reports were fine for them, in the early days.

Self-hosted vs Cloud

In 2011 some friends of mine left Microsoft to start a company. At the time, Amazon Web Services (AWS) was coming along nicely, and most forward-thinking companies and startups were looking to the cloud.

The CTO of my friend's company, let's call him "Bob" (not his real name), decided that it would be cheaper to buy the hardware himself instead of going with something cloud-based. Bob created spreadsheets to justify his desire to buy servers and show how, over time, it would be cheaper. In reality, Bob was a "build his own metal" kinda guy. Bob wanted to spend money on cool hardware and build his own servers, and that's what he was comfortable with, so he found a way to justify it.

Bob spent a couple of hundred thousand dollars on servers. A few years later, all of those servers were sitting in a spare room in their office collecting dust.

Don't be like Bob.

In 2011, it didn’t make sense to buy your own servers. AWS was a great choice. Azure was early, and quite frankly pretty crap at the time. Google's App Engine existed, but I don't think anyone actually used it.

In 2014 when FastBar started, it didn’t make sense to buy your own servers. AWS was cranking along and adding new services at a furious pace, and Azure was busy catching up. Azure had moved from crap a couple of years earlier to a really solid offering by 2014.

Today, it definitely doesn't make sense to buy your own servers. If you're Google, or Microsoft, or Amazon, then sure, buy as many servers as you need. But for the rest of us, cloud computing is so much simpler and easier. For example, at FastBar we have a script that will deploy a fresh version of our entire FastBar environment, including:

  • Web applications, APIs and background workers across multiple servers

  • Geo-redundant SQL databases

  • Geo-redundant storage services

  • Monitoring and logging resources

  • Redis caches

There are 20-odd components altogether, and this all happens within minutes. It's something that would take days to deploy to our own servers by hand, or we would have spent weeks or months automating the deployment process ourselves.
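To give a flavor of what that script does, here's a conceptual sketch. It's not our actual script, and the Provision helper is a hypothetical stand-in for calls to a cloud provider's management SDK or CLI (in our case, Azure's):

```csharp
// Conceptual sketch of a scripted environment deployment - not FastBar's
// actual script. Provision() is a hypothetical stand-in for calls to a
// cloud provider's management SDK or CLI.
using System;

class DeployEnvironment
{
    static void Main()
    {
        var env = "staging"; // hypothetical environment name

        // Web tier: web apps, APIs and background workers across multiple servers
        Provision($"web-{env}", "web apps + APIs + background workers");

        // Geo-redundant data tier
        Provision($"sql-{env}", "geo-redundant SQL databases");
        Provision($"storage-{env}", "geo-redundant storage services");

        // Supporting services
        Provision($"monitor-{env}", "monitoring and logging resources");
        Provision($"redis-{env}", "Redis caches");

        Console.WriteLine("Environment deployed.");
    }

    // In a real script, this would call the cloud provider's management API
    // and wait for provisioning to complete.
    static void Provision(string resourceName, string description)
    {
        Console.WriteLine($"Provisioning {resourceName} ({description})...");
    }
}
```

The point isn't the specific helper, it's that the entire environment is expressed as a repeatable script, so standing up a fresh copy takes minutes instead of days.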

Not only that, but if we decide we need to scale up our front end web servers, all it takes is a couple of clicks, and within minutes, we're running on more powerful hardware.

Better still, we have automatic scaling set up, so if our web servers start getting overloaded, Azure makes more of them magically appear, and when things go back to normal, the extra servers simply go away.

It's a beautiful thing, and it makes me very happy. I'm pretty sure I still have a bit of PTSD when I think about how much effort it would take to set this stuff up manually before the cloud came along.

Another argument I used to hear against the cloud from Bob was that the cloud has lots of outages (here are some recent outages from 2018), whereas Bob claimed his "servers have never gone down". Maybe they haven't. But they will eventually, and usually at the worst possible time.

It's true that the cloud has outages. And these days, when something fails at one of the big cloud providers, it has the potential to take out a huge portion of the internet. But the cloud providers keep getting better - their systems get stronger, and they learn from their mistakes.

Personally, I'd much rather rely on something like Azure, AWS or Google Cloud, so that when an outage occurs (note I said when, not if - all systems go down at some point), there are thousands upon thousands of people tweeting/writing/blogging about it, and hundreds or maybe thousands of engineers working on fixing the problem.

Forget about buying servers, and deploy your system to the cloud. There are so many benefits - from zero up-front capital expenditure, to spinning infrastructure up and down and building out globally scalable, redundant systems within minutes.

Stay tuned for part 3 where we'll explore the different cloud stacks, Platform as a Service (PaaS) vs Infrastructure as a Service (IaaS), 3rd party technologies, where FastBar is at today and key lessons learnt.