Today I am happy to announce that the dotCloud Platform as a Service has been acquired by the US subsidiary of cloudControl GmbH, a German PaaS provider that is expanding into the United States. The dotCloud PaaS will keep its name, and as a dotCloud PaaS customer you can continue to run your mission-critical applications on a platform backed by a great team with years of experience running the cloudControl PaaS service. As the Developer Support Manager for the dotCloud PaaS for the last two years, I'm very happy with this arrangement, both for the continuity it provides to existing customers and for the renewed technology investment cloudControl will make in the future of the dotCloud PaaS.

When we re-incorporated ourselves as Docker in 2013, many of the dotCloud PaaS customers wondered what would happen to their applications. We’ve kept the system running and helped our customers grow, but all the new platform engineering effort was going into Docker. Until now. For the next two months, we’ll be working closely with the cloudControl team to ensure that they know everything they need to keep the dotCloud PaaS running while at the same time launching their existing technology in the US region.

cloudControl’s PaaS technology already has many of the features you’ve asked for, including group ownership of applications (with roles), a newer version of Ubuntu, a supported REST API, more flexible logging, an add-on marketplace for third party service providers, an uptime SLA, and even premium phone support. They plan to provide early access to this enhanced PaaS starting in Q4 of this year (2014). That early access program will run in parallel with the current dotCloud PaaS, so you can evaluate the new technology but won’t be rushed to make any changes to your current application.

Our highest priority is to keep you as a customer and to make this next-generation platform something that you’re eager to use.

In the first quarter of 2015, cloudControl expects the US region to be production-ready and to start helping customers migrate to the new technology. In the second quarter of 2015, all existing dotCloud PaaS customers will be upgraded to the next-generation dotCloud PaaS. Until that final conversion, you can keep running your existing dotCloud PaaS app without changes.

So, for the next few months, nothing changes except a few new names answering support tickets. Then you’ll be able to preview the next generation dotCloud PaaS for several more months, and finally you’ll have several more months to convert your application to the new platform. I’ll be working closely with the cloudControl team for the next two months, and with the help of some of the original dotCloud engineers, we’ll do all we can to ensure you and the new dotCloud engineers (and owners) are off to a bright new start with great new features on the horizon.

I hope you'll welcome the cloudControl team as they take the dotCloud PaaS into the future! I'm sure you'll have questions, so we'll be on #dotcloud on Freenode IRC and also available through support@dotcloud.com. I'm arothfusz on IRC.

We’re happy to announce that Codeship, the hosted Continuous Integration and Deployment platform, has built support for Continuous Deployment to dotCloud.

With Codeship you can test your code and deploy your GitHub and Bitbucket projects. Should your tests fail, Codeship will not deploy your application. Should all your tests pass, Codeship will automatically deploy your app in a matter of minutes.

Continuous Deployment to dotCloud with Codeship

All you need to deploy to dotCloud is your API token. Within 2 minutes you can configure the Codeship to deploy your app to dotCloud.

All you have to do is:

  • Retrieve your dotCloud API token from your account page
  • Fill in the API token
  • Choose a name for your application

As soon as you’ve configured your deployment, the Codeship will deploy your application to dotCloud with every build. The dotCloud command line tool gets installed during deployment and is used to push your app to dotCloud.

Have a look at the videos to see a step-by-step introduction on how to set up the Codeship. Getting started is really easy. Go ahead and give Codeship a try!

If you like this news, consider sharing!

How to continuously deploy a Django app from GitHub to dotCloud

How to continuously deploy a Django app from Bitbucket to dotCloud

About Codeship

The Codeship is a hosted Continuous Integration and Continuous Deployment platform. Be sure to check out the Codeship blog and follow them on Twitter to learn about software testing, Continuous Integration, and Continuous Deployment.

I am thrilled to be joining dotCloud as CEO and excited to be joining the talented, passionate and rapidly growing Docker community.

I started following dotCloud in 2011, when the standard PaaS model was to offer a single stack that ran on a single provider’s infrastructure. I was impressed by dotCloud’s vision of a multi-language PaaS, which offered developers a wide variety of different stacks that worked well together. In the process, dotCloud built a great business around public PaaS.

In the past two years, however, it has become clear that the industry has a set of opportunities that even the broadest-based public PaaS can't address. Developers want to be able to build their applications using an unlimited set of stacks, and run those apps on any available hardware, in any environment. Operators both inside and outside of the enterprise want to be able to run applications seamlessly. Almost every enterprise wants its own PaaS-like environment.

In other words, the industry seems to want not just a multi-language PaaS, but a limitless-language, multi-environment, and multi-enterprise PaaS.

Clearly, this is beyond the capabilities of any one organization or solution to deliver. But, an ecosystem, with the right open source technology, can deliver this.

So, I was exceptionally impressed when, in March of this year, Solomon Hykes and the dotCloud team took the bold step of releasing much of their core technology as the open source project, Docker.  I’ve spent the past three months as an advisor to the Docker project, and have been consistently amazed by both the vision of the team, and by the incredible momentum and community that has built up behind Docker. I was so impressed, that I decided to come on board full time.

This is the new dotCloud/Docker vision of what PaaS (and software deployment in general) should be:

  1. Developers build their applications using their choice of any available services
  2. An application and its dependencies are packaged into a lightweight container
  3. Containerized applications run anywhere (a laptop, a VM, an OpenStack cluster, the public cloud) without modification or delay
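As an illustration of step 2, here is what packaging an app and its dependencies might look like with a minimal, hypothetical Dockerfile; the base image, packages, and file names are illustrative only, not taken from any particular dotCloud app:

```dockerfile
# Start from a stock OS image; the host only needs Docker itself.
FROM ubuntu:12.04

# Install the app's runtime inside the container, not on the host.
RUN apt-get update && apt-get install -y python python-pip

# Bundle the application code and its declared dependencies.
ADD . /app
RUN pip install -r /app/requirements.txt

# The resulting image runs unmodified on a laptop, a VM, or the cloud.
CMD ["python", "/app/server.py"]
```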

With Docker, developers can finally build once and run virtually anywhere. Operators can configure once, and run virtually anything.

We think this will have huge implications for a wide variety of use cases, from developers shipping code, to continuous integration, to web scale deployment and hybrid clouds. Indeed, most of the biggest trends in IT today (hybrid clouds, scale out architecture, big data) depend on making some version of this vision work.

The community seems to agree. In a little more than four months, we've gotten over 4,000 GitHub stars, 30,000 pulls, over 100 significant contributors, and have seen huge numbers of applications getting "Dockerized". Moreover, we've seen some of the largest web companies start to deploy Docker inside their environments. We've seen over 100 derivative projects built on top of Docker. And, our community has integrated Docker into key open source ecosystem projects like Chef, Puppet, Vagrant, Jenkins, and OpenStack.

So…why am I excited? I’ve been fortunate to build businesses at four successful startups (twice as CEO). I’ve learned there are few things as rewarding as joining a great team and community, using innovative and disruptive technology, and solving wide ranging and important problems. Combined with great investors, obvious momentum, a sound existing business, and some exciting new business models, I can’t imagine a better place to be than dotCloud and Docker.

With thanks to Solomon, the team at dotCloud, and the whole community, I look forward to the road ahead!

Read the full press release here.

The new dotCloud Sandbox with Docker

As announced, the dotCloud Sandbox has been sunset, and we have been working on an open-source project that replicates the dotCloud builder. This project lets you develop and host your dotCloud applications anywhere.

We are releasing it today, and the community can now build, deploy, and run the dotCloud sandbox on top of Docker. The project is named Sandbox, and you can find it on GitHub.

Sandbox takes your application (and its dotcloud.yml) as input, and outputs a Docker image for each service that can be built. The resulting Docker images can be directly started in Docker.

Sandbox supports the full build pipeline: it takes your code, unpacks it into a Docker container, installs system packages and application dependencies, configures Supervisor, and generates the environment files. It has been designed to be extensible, so you can easily add support for new service types. Moreover, since it uses Docker, you are no longer limited to Ubuntu 10.04 LTS "Lucid Lynx": you can build your apps on top of your favorite release of Debian or Ubuntu GNU/Linux.
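As a sketch, a minimal dotcloud.yml that Sandbox could consume might look like this; the service names are arbitrary, and only code-service types (such as python and python-worker) apply here:

```yaml
# One web service and one background worker, both built from this repo.
www:
  type: python        # built with the python service recipe
worker:
  type: python-worker # background process, no HTTP routing
```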

Note, however, that Sandbox only knows how to build and run “code services”: databases are not implemented. Unlike the dotCloud platform, Sandbox doesn’t do any kind of orchestration; it just builds and runs individual services. Sandbox doesn’t know how to generate credentials for a database and inject them in the environment of another service. This means that the development workflow with Sandbox is a bit different from what you are used to on dotCloud. Sandbox gives you a build system, but you’ll have to deploy your databases and stateful services beforehand.

As an example of how to use this sandbox, you can check out the Flask/ZeroRPC example in the Sandbox repository. Here is the screencast:

dotCloud Sandbox Screencast

When compared to the dotCloud platform, Sandbox has a more limited feature set. But contributing to Sandbox is easy; and if you want to be involved, here are some possible next steps:

  • add more services (right now only python, python-worker, and the custom service are supported);
  • add a mechanism to automatically select the base image used to build a service (this would lead to support for incremental builds and a --clean flag like on the dotCloud CLI).

Development happens on GitHub and on the #docker IRC channel.

About Louis Opter
Louis Opter is a platform engineer at dotCloud. He has been working with us since day one in 2009. He's passionate about systems programming and specializes in Python. He likes to code while listening to music and is a Vietnamese martial arts enthusiast (Tay Son Vo Dao).

Connect with Louis on Twitter! @1opter

Mo' data, mo' problems!

An example graph of the new memory metrics.

A while ago, we published a detailed blog post explaining How to Optimize the Memory Usage of Your Apps, with a strong emphasis on metrics, because knowing the amount of used and available RAM alone doesn't cut it when you're trying to assess whether or not your apps need more memory.

With this in mind, we just released a new version of the dotCloud Dashboard. The new dashboard exposes more detailed memory metrics. You will now see that the memory allocated to your app is split into four parts: Resident Set Size, Active Page Cache, Inactive Page Cache, and Unused Memory. Let's review what they mean for your apps.

Resident Set Size

That’s essentially the memory used by processes when they malloc() or do anonymous mmap(). This memory is inelastic: it will amount to exactly what your app has been asking for, no more, no less. If your app asks for more than what is available, it will be restarted. If the memory usage was due to a leak or to the occasional odd request, restarting the app will get it back on track. However, if your app constantly needs more of this kind of memory than what is available, it will constantly be restarted, and it will appear to be unstable.
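To get a feel for this metric, here is a small Python sketch that reads the process's own peak resident set size via the standard-library resource module; note that the unit is platform-dependent (kilobytes on Linux, bytes on macOS):

```python
import resource

def peak_rss():
    """Return this process's peak resident set size.

    ru_maxrss only ever grows: like the dashboard's dark-blue line, it
    reflects what the process has actually asked for and touched.
    """
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

before = peak_rss()
# Touching ~10 MB of anonymous memory grows the resident set.
data = bytearray(10 * 1024 * 1024)
after = peak_rss()
print(before, after)
```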

We detect out-of-memory conditions, and we report them to you: we send e-mail notifications, and we record them to display them on the dashboard. When you receive those notifications, you should take them very seriously, and scale up your app — or audit your code to reduce your memory footprint.

On the new memory graph, the resident set size is drawn in solid dark blue. It’s the baseline of your memory usage, and you should not scale your memory below that amount.

Active and Inactive Page Cache

When your app reads and writes from disk, data never goes directly into the application buffers. It transits through the system's buffer cache, or page cache. It stays there for a while, so that if you request the same data again some time later, it will be available immediately, without performing actual disk I/O. Likewise, when you write something, it transits through the same buffer cache; this lets the system perform some optimizations regarding the order in which writes should be committed to disk.

The page cache is elastic: when you run out of memory, the system will happily discard it (since the cached data can be re-read anytime from the disk), or commit it to disk (in the case of cached writes). Conversely, if you have tons of memory, the system will happily retain as much as it can in the cache, which can lead to absurdly high memory usage for seemingly trivial apps. Typical example: a tiny HTTP server, handling requests for 10 MB of content, and using a few GB of page cache. How? Why? Well, because it's also logging requests, and the log happens to be on disk. And Linux will keep the log in memory as well, as long as memory is available. Of course, if at some point you need the memory, Linux will free it up instantly. But meanwhile, if you look at your usage graphs, you will see the big memory usage.

On Linux, the page cache is split into two different pools: active and inactive. As the names imply, the active pool contains data that has been accessed recently, while the inactive pool contains data that is accessed less frequently. To make an informed scaling decision, it is important to understand how "active" and "inactive" really work under the hood. The memory is divided into pages, which are blocks of 4 KB. A given page of the buffer cache will start its existence (when it is loaded from the disk) as an active page. When an inactive page is accessed, it gets moved to the active pool. That part is easy! Now, when does an active page get moved to the inactive pool? This doesn't happen out of "old age" (i.e., a page being left untouched for a while). It happens when the active pool becomes bigger than the inactive pool! When there are more active pages than inactive ones, the kernel scans the active pages and demotes a few of them to the inactive pool. Some time later, if there are still more active than inactive pages, it will do it again, and it will go on until the balance is restored. However, at the same time, your app is running and accessing memory, potentially moving inactive pages back to the active pool.

What does it mean? The bottom line is the following: you should look at the active:inactive ratio. If this ratio is big (e.g. 200 MB of active memory vs. 20 MB of inactive memory), it means that the system is under heavy pressure: it is constantly moving pages from active to inactive (to meet the 1:1 ratio), but the activity of your app is constantly moving pages back from inactive to active. In that case, it would be wise to scale vertically, to achieve better I/O performance (since more data will fit in the cache). As you add more memory, the ratio will drop and get closer to 1:1. A ratio of 1:1 (or even lower) means that the system is at equilibrium: it has moved all it could to inactive memory, and there was no strong pressure to put things back into active memory. You want to get close to this ratio (at least if you need good I/O performance).
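The ratio check described above is easy to automate. Here is a small Python sketch; the 1.5 threshold is just an illustrative cut-off for "well above 1:1", not a value used by the platform:

```python
def cache_pressure(active_kb, inactive_kb):
    """Classify page-cache pressure from the active:inactive ratio.

    A ratio well above 1:1 suggests the app keeps re-activating pages
    the kernel is trying to demote, i.e. the cache is too small.
    """
    if inactive_kb == 0:
        return "under pressure"
    ratio = active_kb / inactive_kb
    if ratio > 1.5:  # e.g. 200 MB active vs. 20 MB inactive
        return "under pressure"
    return "at equilibrium"

print(cache_pressure(200 * 1024, 20 * 1024))   # heavy pressure: scale up
print(cache_pressure(100 * 1024, 110 * 1024))  # balanced: no action needed
```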

On the new dashboard, active and inactive memory pools are shown in medium-blue and light-blue shades, respectively, to highlight the fact that they are still important, but less so than the (darker) resident set size.

Free Memory

Well, that one at least doesn't deserve a long, technical explanation! If the metrics show that your app consistently has a comfortable margin of free memory, you can definitely consider scaling down by that amount.

Warning: even if it's often said that "free RAM is wasted RAM", be wary of spikes! Take, for instance, a 1 GB Java app which constantly shows 200 MB of Free Memory. Before scaling down to 800 MB, make sure that it is not experiencing occasional spikes that consume that Free Memory! If you scale down, your app will be out of memory during the spikes, and will most likely crash. Also, remember that the long-term graphs (like the 7-day and 30-day trends) show average values, meaning that short bursts will not show up on those graphs. The metrics sample rate is 1 data point per minute, and that's about the resolution that you can get on the 1-hour and 6-hour graphs. This means that, unfortunately, short spikes (less than one minute) won't appear on any graph.
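A quick Python sketch shows why per-minute averaging hides short spikes: a 5-second burst to 1000 MB in an otherwise flat 800 MB minute averages out to well under 900 MB on the graph.

```python
def downsample(samples, window):
    """Average consecutive window-sized groups of per-second samples,
    mimicking a 1-data-point-per-minute metrics pipeline."""
    return [
        sum(samples[i:i + window]) / window
        for i in range(0, len(samples), window)
    ]

# 60 seconds of memory usage in MB: flat at 800 MB, with a 5-second
# spike to 1000 MB in the middle.
usage = [800] * 60
for t in range(30, 35):
    usage[t] = 1000

per_minute = downsample(usage, 60)
print(max(usage))     # the real peak the app experienced
print(per_minute[0])  # the much lower value the graph shows
```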

On the new dashboard, the free memory is shown in light grey.

Putting It All Together

This is a lot of new information, but the new dashboard should make it very easy for you to figure out the appropriate vertical scaling for your application.

  • For code services, make sure that the Resident Set Size (dark blue) never maxes out the available memory. If it gets close, you should add more memory before you receive out-of-memory notifications. Conversely, do not hesitate to cut into the Free Memory and the Inactive Page Cache (grey and light blue areas). The Page Cache will typically be small compared to the Resident Set Size.
  • For database services (and static services), the previous rule applies as well, but the Page Cache (both Active and Inactive) will very likely be much bigger, and you will have to pay attention to it, too. As a rule of thumb, compare the Active and Inactive amounts during peak times. If Active is bigger than Inactive, your memory usage is close to optimal. If they are equivalent (or if Inactive is larger), it means that you can scale down a little bit. This should be an iterative process: scale down, wait for memory usage to stabilize, check again, and repeat until the Active pool starts being larger.
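As an illustration only, the two rules of thumb above could be folded into a single recommendation function; the thresholds here are invented for the example and are not values used by the dashboard:

```python
def scaling_advice(rss, active, inactive, free, total):
    """Very rough vertical-scaling hint from the four memory pools (MB)."""
    if rss >= 0.9 * total:
        return "scale up"    # RSS is close to maxing out available memory
    if active > inactive:
        return "keep"        # cache under pressure: usage is near-optimal
    if free + inactive > 0.3 * total:
        return "scale down"  # plenty of reclaimable memory to cut into
    return "keep"

print(scaling_advice(rss=950, active=20, inactive=20, free=10, total=1024))
print(scaling_advice(rss=200, active=100, inactive=300, free=424, total=1024))
```

In practice you would re-run this check after each resize, since the pools shift as the app and the kernel adapt to the new memory limit.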

We hope that the new dashboard will help you make informed scaling decisions, and cut down significantly on your dotCloud bill!


Dear dotCloud Customers,

We are going open-source.

It has been a wild week for dotCloud. Of course, as we prepared to open-source Docker, the container technology that powers the platform, we hoped it would be well received, like ZeroRPC and Hipache before it. But nothing could have prepared us for the magnitude of the response. Now, six days, 50,000 visits, 1,000 GitHub follows and 300 pull requests later… we think we get the message. You want an open-source dotCloud – and we're going to give it to you.

Today, as the first step in our new open-source strategy, we are announcing an important change to our free Sandbox. In the coming weeks we will hand it over to the community as an open-source project which can be deployed and hosted anywhere. As part of this transition we will be sunsetting our free hosting tier – see below for details. The resources freed by this transition will be re-invested in our open-source roadmap.

I want to emphasize that this transition does not affect our Live and Enterprise flavors, and it does not change our business model. Our core competency is and will continue to be the operation and support of large-scale cloud services, for tens of millions of visitors, 24 hours a day, every day. We intend to continue expanding that business, and we believe the best way to do that is by embracing open-source.

1. Going open source

Our approach to open-source is simple: solve fundamental problems, one at a time, with the simplest possible tool. The result is a collection of components which can be used separately, or combined to solve increasingly large problems.

So far dotCloud’s open-source toolbox includes:

  • ZeroRPC, a communication layer for distributed services;
  • Hipache, a routing layer for HTTP and WebSockets traffic;
  • stack.io, a communication framework for real-time web applications;
  • Docker, a runtime for Linux containers;
  • recipes for automatically deploying NodeJS, Django, Memcache, and dozens of other software components as cloud services.

All these components are already available, and the open-source community is using them to build alternative implementations of dotCloud’s development sandbox. We want to make that even easier by open-sourcing the remaining proprietary components – including our uploader, build system, database components, application server configuration, and more.

To learn more about future open-source announcements, follow the Docker repository and join the Docker mailing list.


2. Sunsetting the hosted sandbox

In order to properly focus resources on our ongoing open-source effort, we will be phasing out the hosted version of the free Sandbox. Going forward, the recommended way to kick the tires on dotCloud will be to deploy a Live dotCloud application. For your existing Sandbox applications, we can provide an easy upgrade. If you don’t feel ready to pay us quite yet, take a look at what the community is building.

Below is a calendar of the sunset. As usual, our support and ops team will be happy to assist you in every way we can during the transition.


  • April 8th: no change.
  • April 22nd: All Sandbox applications will be unreachable via HTTP. You can still access them via SSH to download your code and data.
  • April 25th: All Sandbox applications will be destroyed.

Note that we've pushed out the sunset dates since first posting this blog. We've removed the 'no push' week of April 8th and extended HTTP access to the 22nd.

How to Graduate from the Sandbox

We’ve made it easy for you to change your Sandbox application to a Live flavor if you want to keep it running on the dotCloud platform:

  1. Add your billing information to your account.
  2. File a support ticket telling us which applications to migrate. Please use your account email address and give the full URLs of the applications.
  3. We'll do the rest.

If you don’t want to move to a paid service, you can use several techniques to download your data and files before they are destroyed.

For those of you who have been using the Sandbox as staging for paid applications, we’re sorry for the inconvenience. We hope our hourly billing will help keep your staging and testing costs down, and that developing in a paid service will ease testing related to scaling.

Looking Back, Looking Forward

We want to thank you, our sandbox users, for trying out the dotCloud platform. We hope that you will enjoy experimenting with our open-source version, discovering the awesome features of our Live flavor, or both!

We look forward to helping you be the most awesome and productive developers out there.

Happy hacking!

/Solomon Hykes

CEO, dotCloud


"Things in nightlife are very subjective because it is a business based off of people first, and products (alcohol) come second, so it is hard to build an algorithm to replicate the job of an operator or doorman as far as reservations via a website go" @NYNightLife

The Bars and Nightclubs industry is a fragmented, $23Bn industry with high turnover. IBISWorld's Bars & Nightclubs market research reported that there are approximately 65,774 family-owned and operated businesses in the US, with 98% of them employing fewer than 50 people. The competition for clientele is fierce, especially with high concentrations of clubs in metropolitan cities.

It is tougher for nightclub owners than restaurant owners to turn a profit, as there are fewer hours and days of operation per year. To add to the problem, nightclub clientele tend to occupy tables until closing, or at least longer than diners in restaurants, and an empty table is lost revenue.

According to Chef's Blade, there are many fixed costs that nightclub owners cannot change, such as rent, equipment, insurance, inventory, and payroll. Clubbing Owl aims to provide a full suite of venue management and outbound marketing software to nightclub owners so that they can positively impact cash flow.

Unlike other traditional club management software that serves the back office, Clubbing Owl is designed to serve three communities: club-goers, nightclub owners, and promoters.

For club-goers, Clubbing Owl's platform can confirm guest admissions through SMS text messaging. The system is integrated in real time with guest list management so that no guest is ever turned away at the door. The integration with Facebook allows Clubbing Owl to update club-goers' Facebook status once they have been confirmed. The status updates not only let their friends know about the clubs they frequent, but also allow club owners and promoters to tap into their guests' networks of friends.

For promoters, Clubbing Owl can help with guest list management. Promoters can send SMS messages to club-goers as soon as they are confirmed on the guest list.

For nightclub owners, Clubbing Owl provides live chat so that the entire staff and extended team of promoters can communicate in real time using smartphones and tablets. Clubbing Owl’s Host Check-in app is also synchronized with guest confirmation.


PyCon 2013 March 13-21 in Santa Clara, CA

We couldn’t be more excited for PyCon 2013!

If you’re new to PyCon we suggest catching up with PyDanny’s Beginner’s Guide to PyCon. It’s a 4-part series but his most recent posts cover the actual conference days. Here’s his guide to Friday and Saturday events.

Open Spaces

Open spaces are a way for people to come together to talk about topics, ideas or whatever they want. There’s a board by the registration booth where you can schedule an open space.

The open space schedule is like an un-conference. Anyone can suggest a topic, claim a room and dazzle attendees. You can find the tentative schedule here.

Team dotCloud will have our own open space on Saturday night from 9-10pm in room 202 to showcase “Buildbot on dotCloud”.

Buildbot on dotCloud

Continuous integration and testing is critical to application performance. At dotCloud, we have implemented Buildbot at a large scale to make our platform more reliable and robust. You should too.
In this open session, dotCloud engineer Daniel Mizyrycki will share how we implemented Buildbot at a large scale within dotCloud. He will also show how to easily integrate Buildbot as a service on dotCloud.
Ultimately we will open-source this project on GitHub.

Plus a special Lightning Talk….

We have been working on something big here at dotCloud and we can’t wait to unveil it at a Lightning talk session during PyCon.

Stop by booth #157 for more details and to meet with our engineers.


dotCloud gave us the solid and flexible abstraction layer we needed to get our business off the ground. We went from a working prototype to a professional product in just under four months.” Brian Schwartz, Founder & CEO

Titan modernizes traditional IT procurement by taking the process online through a competitive marketplace similar to what NASDAQ has done for institutional trading by providing a view into market participants’ quotation activity. By enabling the most qualified local suppliers to compete for IT projects nationwide, companies can now get the highest quality IT services for the best possible price. The company name comes from the Greek gods. Titan represents incredible strength, and one can’t spell Titan without “IT”.

The founders of Titan have a deep background in private equity and enterprise IT procurement. Since most companies lack expertise in technology, large IT purchases are often made with insufficient information, which results in non-competitive pricing or overpaying. This is in stark contrast to financial trading transactions where market efficiency prevents sellers from charging more than fair market price.

Envision a restaurant chain looking to procure software to optimize their inventory management operations. Generic software isn’t going to bring enough efficiency to their unique operational structure and hiring a local supplier for ongoing customizations to make it fit is an expensive and endless endeavor. Alternatively, a global supplier can build custom software to the exact specifications, software that will scale as the operations grow more complex, but add a prohibitive premium over development costs.

Titan eliminates this tradeoff between quality and price by allowing companies to bid out their IT projects in a competition where only the qualified suppliers can participate. As a result, companies can get high quality IT resources that meet their requirements and at the fair market price.

How Titan Works

Titan lets companies post their IT project along with any requirements, and then qualified suppliers have 2 weeks to submit competitive bids. Once the winning bid is chosen, Titan processes the work contract and handles milestone payments through their escrow service. It’s a win-win for buyers and suppliers, and is 100% free to use.

Titan specializes in the supplier due-diligence process, which is particularly important to get right when it comes to large IT projects. In order to bid, suppliers must prove they are both financially secure and qualified for the particular project. Titan's qualification process includes performing annual business credit checks, interviewing the team, and verifying past work references. For example, when a supplier makes a claim on Titan to have worked with a Fortune 500 client, this means that Titan has already verified this claim with positive references provided by the supplier.


A couple weeks ago, I presented "Distributed, Real-time Web Apps with stack.io" at San Francisco's February Cloud Mafia Meetup. If you'd like to learn more about dotCloud's stack.io project, you can download the slides and view a short demo video here.

About stack.io

stack.io is an open-source communication framework that lets you rapidly build real-time web applications with a strong RPC flavor.

It possesses two core supporting technologies. On one hand, the ZeroRPC library lets us build a robust service-oriented architecture on the back-end, making it simple to decompose complex code into functionally simple components in a flexible, language-agnostic way. On the other hand, the socket.io library enables real-time communication with web clients through the use of WebSockets (and gracefully degrades for older or less capable browsers). stack.io also provides powerful functionality on top of its RPC layer, such as streaming responses, built-in authentication, and request middlewares.

View the Slides

View the Live Demo

More information

See our website for documentation, and check out the source code on GitHub.

About Joffrey

Joffrey is a software engineer at dotCloud. After having worked on several projects involving real-time communication in node.js, including one in partnership with Salesforce Europe, he joined dotCloud to work on a real-time communication framework that would ultimately become stack.io.
He has a masters degree in computer science from the French engineering school EPITA.

Check out his GitHub repo.