Roboflow: making computer vision accessible to all developers
Shared by braddwyer · 811d ago · 11 comments

Hey all, we're working on Roboflow, a tool that makes computer vision accessible to all developers (even if they're not machine learning experts).

We believe computer vision is going to transform every industry; it's as groundbreaking as the PC or the Internet were. But it's still really hard to use in practice. The state of computer vision today is akin to the Internet of the 1990s: if you wanted to sell products online, you first had to build your own database and web server. Our goal is to provide the equivalent computer vision infrastructure so developers can spend their time on their domain-specific problems instead of wrestling with ML primitives.

We launched in January, and currently you can use Roboflow to:
- Host your datasets in the cloud
- Pre-process and augment images
- Convert annotation formats
- Visualize annotations and metadata
- Get a dataset health check
- Maintain dataset version history
- Share datasets with your team
- Train and export a model from our library
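To give a sense of what "convert annotation formats" means when you hand-roll it yourself, here's a minimal sketch (illustrative only, not Roboflow's implementation) of turning a Pascal VOC bounding box into a COCO-style `[x, y, width, height]` box:

```python
# VOC stores a box as opposite corners (xmin/ymin/xmax/ymax); COCO
# stores the top-left corner plus width and height. Element names
# follow the standard VOC XML layout.
import xml.etree.ElementTree as ET

def voc_box_to_coco(obj: ET.Element) -> list[int]:
    b = obj.find("bndbox")
    xmin, ymin = int(b.find("xmin").text), int(b.find("ymin").text)
    xmax, ymax = int(b.find("xmax").text), int(b.find("ymax").text)
    return [xmin, ymin, xmax - xmin, ymax - ymin]

sample = ET.fromstring(
    "<object><name>pawn</name><bndbox>"
    "<xmin>48</xmin><ymin>240</ymin><xmax>195</xmax><ymax>371</ymax>"
    "</bndbox></object>"
)
print(voc_box_to_coco(sample))  # [48, 240, 147, 131]
```

Multiply this little shim by every pairing of formats (VOC, COCO, YOLO, TFRecord, ...) and you get a sense of the boilerplate we're trying to eliminate.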

Soon, we'll be launching our labeling integrations at which point you will be able to go from raw images to a trained model without ever leaving Roboflow.

Feedback Request
We've been working hard on improving our onboarding flow. We'd love for you to go through our intro tutorial and let us know any places you get stuck or any suggestions for how to improve it. (bonus points if you record your screen and narrate what you're thinking as you go through it)

Additionally, we've been writing lots of content recently. If you have any feedback on our tutorials, we'd love to hear it.

alexshevchenko · 811d ago

I've mentioned it before, but uploading files via S3 or scp would be nice. I just don't want to clog the 128GB of space on my poor MacBook with a 30GB dataset that I have on my AWS machine. Being able to scp from my AWS instance to your service would help out a lot (and shouldn't be too hard to implement).

braddwyer · 811d ago

I can certainly understand the appeal, and we will likely build an integration like that someday.

An API has come up during sales calls but hasn’t yet been a dealbreaker.

Currently the upload UI has a ton more functionality tied into it than just uploading images: for example, matching annotations and images, visually verifying that your annotations are correct, correcting annotation errors, creating thumbnails and annotation images, and selecting a train/test split.

Some of those could be added to a CLI, but one of our core beliefs is that CLIs are not well suited to visual data. So it's taking a backseat for now while we focus on making the core flow as solid and easy to use as possible.

beng · 811d ago

How is this different from something like Lobe?

braddwyer · 811d ago

Lobe is really neat, their launch video was amazing.

I haven't had a chance to try it yet since they're still in closed beta, but as I understand it, their focus is on visual model construction and training.

We are currently focused on the extract, transform, load (ETL) part of the pipeline, which today requires building in-house tools.
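As one concrete example of that in-house glue, here's a sketch of a deterministic train/test split (my own illustrative approach, not Roboflow's actual method): hashing the filename instead of calling `random()` keeps each image in the same split across dataset versions.

```python
# Bucket each filename into 0-99 by hashing it; the split is then
# stable across runs and across dataset versions. The 80/20 ratio
# and helper name are illustrative.
import hashlib

def split_for(filename: str, test_fraction: float = 0.2) -> str:
    digest = hashlib.md5(filename.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % 100  # stable 0-99 bucket per filename
    return "test" if bucket < test_fraction * 100 else "train"

files = [f"frame_{i:04d}.jpg" for i in range(1000)]
splits = [split_for(f) for f in files]
print(splits.count("test"), "of", len(files), "in test")
```

Every team building a CV pipeline ends up writing some version of this; it's exactly the kind of undifferentiated plumbing we want to absorb.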

We think Google and Facebook are doing a great job on the training side, and we don't plan to ever compete with TensorFlow or PyTorch. We want to seamlessly interoperate with these tools people already use and make them more accessible. (Similar story on the labeling side: we don't want to compete with Scale or Labelbox; we're happy to integrate with both to make them easier to use.)

williem · 809d ago

1. I did the intro tutorial a few months ago. As far as I remember, the onboarding flow is smooth and clear. It helps the user understand the process from uploading data through data augmentation.

2. It might be better if you wrote about solving a real-world problem. Currently, the public datasets are chess, raccoons, or synthetic data. I would be more interested if the post were about "Build your own people-counting detector for your retail store" (something like that). Your platform is clearly useful for experienced computer vision engineers/scientists, isn't it? So I think the blog should focus on intermediate or advanced content, whereas the current blog targets beginner computer vision engineers. However, it depends on your target users; you might want to survey your users about their experience and what content they want. Currently, when I get stuck on computer vision problems, I google and read Medium, Towards Data Science, or GitHub solutions.

3. The big players in the market are research centers in industry and academia, so you might want to focus on them, since tools are often free for educational purposes. You might want to collaborate with one or a few universities for promotion. If you can attract top professors at UCSD or Berkeley who are experts in computer vision and have publications at CVPR or ICCV, it could really help your product sales. When I visited ICCV 2019 in Korea, there were many startups exhibiting tools for sale to academia and industry. You can see the list here.

Vasek · 810d ago

Hey! Happy to give you some feedback!

- The font size in the ToS Summary modal could maybe be bigger
- Tutorial - others have pointed this out already: downloading the dataset, unzipping it, and re-uploading it felt unnecessary
- Tutorial - it would be nice if I could toggle the message box or go back. I accidentally clicked and the box disappeared
- Tutorial - there was a step where I could play with pre-processing options. I would find it helpful if you walked me through some of them. I have some experience with ML/DL, but I'm not an ML researcher, so I'm not sure how typical a user I am
- Button - please center the loading spinner

braddwyer · 810d ago

That button... someone needs to revoke my "attention to detail" card; I don't deserve it anymore. (Pushed a fix already!)

Great feedback on the tutorial! Hopefully we'll have the next version of that finished and deployed next week.

adityarao · 810d ago

Hi, unfortunately I have never worked on computer vision, so it's difficult for me to give contextual feedback! But the blog looks great. How did you guys find time to write so many posts, ha ha :) Kudos! You might want to set up some navigation on the blog though: maybe tags, featured posts, etc.

The Ghost dashboard should be able to do it out of the box.

braddwyer · 810d ago

We divide and conquer. I’ve been primarily focused on building out the product and Joseph has been focused on marketing and sales (which includes content).

Because we want to introduce a new wave of developers to computer vision, content is a core part of our strategy. We actually just onboarded our first non-founder content writer this week. We'll see how it goes! But we'd love to ramp up production since it's paying dividends on user acquisition.

Great suggestion on the tags. We've got one for tutorials right now, but we should definitely provide a better way to navigate than chronological order.

omarkamali · 810d ago

Just went through the basic tutorial.

It's quite neat and streamlined. I noted a couple of points:

● Having to extract the chess dataset zip just to re-upload it is a bit strange. I tried to drop the zip I got and then got an error. Extracting the zip somewhere sensible, then remembering to clean it up afterwards, is IMO a drag on the experience at the very first stage (but I might just be bickering here).

● The onboarding coach marks disappear when you click on the backdrop, and there's no way to bring them back. I mistakenly clicked on the backdrop twice and lost the instruction that was there. A button like "Show current step" would have been helpful.

● I'm arguably not an ML practitioner, but I would have appreciated a link at the end of the dataset tutorial on how to train on the chess dataset. I know it's not your core business, but linking directly to one of your beginner articles would have been useful.

● I found the onboarding experience good, but not great. It was flat overall, with no emotional "rollercoaster" or "wow" moment at any point. You basically need a "buildup" and a "drop" somewhere (like in electronic music :D). This could be, for example, taking me directly to a Colab notebook ready to do something useful with the chess dataset.

I hope these points were useful. You've got an excellent product nevertheless! Kudos :-)

I will definitely come back to try to achieve something (as discussed with you before, Brad).

braddwyer · 810d ago

Thanks! Super helpful. Going to integrate that feedback in the next version of the onboarding flow.