How I upgraded to Drupal 7. Part 2

This is the second part of a series on how I upgraded my website from Drupal 6 to Drupal 7. The sources for my website are freely available on my GitHub account.

Automating as much as possible

When I started working on the new version of the site, I not only wanted to be able to install the site from scratch, I also wanted the one-time upgrade from Drupal 6 to Drupal 7 to be as smooth as possible. However, the life of a website doesn't stop once the initial deploy (or 're-deploy' in this case) is done. In fact, it only begins at that point. You will need to keep the site updated with new functionality and, more importantly, with security updates. Sadly, this deployment and updating is often an afterthought, or not done at all. Let's face it, deployment and updates are boring and risky tasks. So I wanted a setup where I could deploy and update my site as easily as possible, without having to think much about it.

I have split everything that needs to be done into a few simple steps and written a script for each of them.

  • build: put all the pieces together to form the site. That means, Drupal core, contrib-modules and custom code.
  • release: prepare a release for deployment
  • deploy: deploy the prepared release on the server
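
The three steps could be tied together with a small dispatcher like the one below. This is a hypothetical sketch: the script names and paths are assumptions, not the author's actual files.

```shell
#!/bin/sh
# Hypothetical dispatcher for the three steps described above.
# The script paths are assumptions; substitute your own.
run_step() {
  case "$1" in
    build)   profiles/budts_be/scripts/build.sh ;;    # drush make + patches
    release) profiles/budts_be/scripts/release.sh ;;  # merge to master + tag
    deploy)  profiles/budts_be/scripts/deploy.sh ;;   # run on the server
    *) echo "usage: run_step {build|release|deploy}" >&2; return 1 ;;
  esac
}
```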


In the past I have always used drush up to update core and contrib for my sites. While this works most of the time, in my opinion it has some problems (or I'm doing something wrong). To begin with, I never succeeded in updating Drupal core with it: it would download the new core into a subdirectory, remove everything (including the .git directory!) and then complain that it couldn't find an installed Drupal. Other problems I have had with it include updates being reverted and not all available updates being detected. And applying patches to core and/or contrib is not easy either.

To overcome those problems I (finally) started using drush make. With Drush make you define a simple makefile which contains all the modules, libraries, core, etc. that you need for your site. You can find mine in profiles/budts_be/budts_be.make. When you run the correct drush make command, drush will download the correct version of Drupal (the latest 7.x release in my case) and all the required contrib modules. Contrib modules can also provide their own makefile, which will be run during this process as well. Geshifilter is one module which does this, so you only have to include that module in your makefile and all of geshifilter's dependencies will automatically be downloaded too.
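
A Drupal 7 drush make file looks roughly like the fragment below. This is an illustrative sketch, not the contents of the actual budts_be.make; the module names are placeholders.

```ini
; Illustrative drush make fragment (make API version 2), not the real file.
core = 7.x
api = 2

; Drupal core itself
projects[drupal][type] = "core"

; Contrib modules, placed in a contrib subfolder
projects[views][subdir] = "contrib"
projects[geshifilter][subdir] = "contrib"
```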

While Drush make seems to expect to build everything, including your custom code, into a new, empty folder, and to use that for deployment, I chose not to do this. Since I'm deploying using Git, it is easier to just have everything inside one tree and let Drush make download everything in place. This also makes development easier, as you need a fully built Drupal tree anyway. As an alternative you could build into a new directory and symlink your custom code. Anyway, this works for me; it might be different for another project.

During the (re-)build of the site, some additional steps need to be done, so to actually build the site I wrote a wrapper script around the drush make command. This wrapper script is a simple shell script and can be found in profiles/budts_be/scripts/. The script performs the following steps:

  • First it removes almost all the core and contrib files. This ensures that files which are removed from core or contrib modules will also be removed from the repository.
  • Then it runs the drush make command with the correct parameters.
  • Running drush make also runs the geshifilter.make file, downloading all the dependencies for that module. However, I have a small problem with that: I prefer the contrib modules to live in a contrib subfolder, and define it that way in my makefile, but geshifilter.make does not place its dependencies in that subdirectory. So the next step in the script, after running drush make, is moving the geshifilter dependencies into the contrib subfolder. If you know a better solution for this, please answer my question on Drupal Answers!
  • Finally the script will apply some local patches. I really try to avoid patching core and contrib as much as possible, but sometimes it is just necessary. Currently I use one patch for the tagadelic module to fix a PHP notice. I also created a few patches for the .gitignore and .htaccess files which are provided by Drupal core; in my opinion those are in fact two files which are OK to modify. Sure, drush make supports applying patches, but it seems to have problems (or no support at all) with local patch files. Instead I just use the patch utility for each *.patch file in the patches directory. To keep track of where the patches come from, I keep the git diff commands I used to generate them in the script (code).
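
The steps above could be sketched as a shell script like this. The directory names, drush flags and patch layout are assumptions for illustration, not the author's actual wrapper.

```shell
#!/bin/sh
# Sketch of a build wrapper around drush make, following the steps above.
# Directory names and flags are assumptions, not the author's actual code.

MAKEFILE=profiles/budts_be/budts_be.make

clean_tree() {
  # Remove core and contrib files, so that files dropped upstream also
  # disappear from the repository (custom code and .git are kept).
  rm -rf includes misc modules themes sites/all/modules/contrib
}

run_make() {
  # Rebuild Drupal core and all contrib modules in place.
  drush -y make "$MAKEFILE" .
}

move_geshifilter_deps() {
  # geshifilter.make does not honour the contrib subfolder, so move
  # its dependencies there by hand (the path is an assumption).
  mv sites/all/modules/geshifilter sites/all/modules/contrib/
}

apply_patches() {
  # Apply every local patch; the patches were generated with git diff.
  for p in patches/*.patch; do
    patch -p1 < "$p"
  done
}

# Run all four steps in order:
# clean_tree && run_make && move_geshifilter_deps && apply_patches
```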


As I explained in the first part, all the development currently happens on the drupal7 branch. Once I'm ready to deploy everything to the site, I merge everything from the drupal7 branch into the master branch. This way, the master branch only ever contains deployable code. I also create a tag, so that I can easily find all the 'releases' in the repository. When merging the two branches I use the --no-ff argument for git merge, to force Git to always create a merge commit, even when it would be able to do a simple fast-forward. This clearly shows in the history when a merge has happened. All this merging and tagging is done by simply running the script (code).
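
The merge-and-tag step could be sketched as a shell function like this. The branch names follow the article; the function name and argument handling are assumptions.

```shell
# Sketch of the release step: merge the development branch into master
# with a forced merge commit (--no-ff), then tag the result.
# Branch names follow the article; everything else is an assumption.
release() {
  tag=${1:?usage: release <tag-name>}
  git checkout master &&
  git merge --no-ff -m "release $tag" drupal7 &&
  git tag "$tag" &&
  git checkout drupal7
}
```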


On the server side I chose to deploy directly from the Git repository. For the previous version of the site I was using rsync. Rsync is certainly better than just FTP-ing your files (which happens a lot and makes me sad), but for me it has some disadvantages compared to Git. The most important one is that with Git I always know exactly which version of the code is deployed on the server. With rsync this is not the case.

For this operation, I wrote the script (code). This script does a number of things:

  • First it uses Drush to set the site into maintenance mode
  • Then it removes the oldest database backup and renames the most recent database backup so that it becomes the old backup
  • After that it uses the drush status command and some sed-magic to get the name of the correct database and dumps the database to a new backup (so I always keep the current backup + the previous one)
  • Then it does a git pull. It simply pulls the current branch, so the script could also be used to deploy a staging server from the drupal7-branch instead of the master-branch
  • Finally the script runs drush updb, to run all the pending updates. These can be updates for core and contrib modules, but also for custom code. Since everything should be in code, new functionality is added to the site using features or update methods, so running the update methods should be enough to enable it
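
The steps above could be sketched like this. The backup file names and the exact sed expression are assumptions, not the author's actual script.

```shell
#!/bin/sh
# Sketch of the deploy script described above, as run on the server.
# Backup locations and the sed expression are assumptions.

BACKUP_DIR=$HOME/backups

db_name() {
  # Parse the database name out of `drush status` output on stdin.
  sed -n 's/^ *Database name *: *//p'
}

deploy() {
  drush -y vset maintenance_mode 1              # put the site offline

  # Rotate dumps: drop the oldest, keep the most recent one as .old
  rm -f "$BACKUP_DIR/site.old.sql"
  mv -f "$BACKUP_DIR/site.sql" "$BACKUP_DIR/site.old.sql" 2>/dev/null || true

  # Dump the current database before touching anything
  mysqldump "$(drush status | db_name)" > "$BACKUP_DIR/site.sql"

  git pull        # pulls whichever branch is currently checked out
  drush -y updb   # run all pending update hooks
}
```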
After the script has run, I have a new backup of the database, the new code is on the server, all the updates have been executed and the site is running in maintenance mode. So the only things left to do are a quick check that everything is still OK and bringing the site back online. To bring the site back online, I added the script (code).
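
Bringing the site back online amounts to unsetting Drupal 7's maintenance_mode variable; a minimal sketch (the function name is an assumption):

```shell
# Minimal sketch: leave maintenance mode. Drupal 7 stores this in the
# maintenance_mode variable, toggled here via drush vset.
site_online() {
  drush -y vset maintenance_mode 0
}
```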

The full scenario

When I now receive the dreaded "New release(s) available"-mail, upgrading is a rather simple process:

# On my development machine, on the drupal 7 branch
cd /path/to/my/drupal
# rebuild the site, using the updated modules/core
# run the updates on my local development version
drush --yes updb
# Now my local development version is upgraded so test it
firefox http://budts.localhost
# Everything ok, commit it
git add . && git commit -m "updated core and contrib"
# create a release (= merge to master + tag)
# push everything to the git server. Git does not push tags by default
git push && git push --tags

# now let's deploy this on the server, using ssh
ssh the-server
cd /path/to/my/drupal
# run the deployment

# test that everything is ok

# and bring the site back online (disable maintenance mode)

As you can see, most of the crucial deployment steps are covered by the scripts, to minimise the work and avoid mistakes. Also notice that during this entire process I never see the Drupal admin section at all.

To deploy new functionality to the site, the process is similar, except that the first step is not necessary. Apart from that it is exactly the same, because everything is added in features and update methods.



nephastieke on 2012-06-12 17:27


Hey, looking good, getting a bit technical. I will have to read up to understand all of it though. How do you back up your database? Is this an automatic process that's in place or do you take manual backups?

Can't wait to read the next part!


jeroen on 2012-06-15 20:59

I have two kinds of backups. I have a full hosting backup, which backs up most of the files on the webserver and all the available databases. It's a small Ruby script which uses rsync to copy over the new and modified files and a bit of code to get the list of all the databases and then launch a mysqldump for each database. This is the backup for when the webserver explodes :).

The script (code) also makes a backup. However, this backup remains on the web server, it is just in case something awful goes wrong during the update. In that case I can quickly restore the freshly created backup.
