Using voice commands to add items to Sitecore with IFTTT: a complete walkthrough

Having purchased a few Google Home devices, my mind turned to getting them to work with Sitecore. Wouldn't it be cool to tell your Google Home or mobile device to interact with your Sitecore environment?! So I set up a POC to do just that: using a custom route in an API controller, we can create Sitecore items by asking Google Home to "Create Sitecore item [ItemName]":

First, visit the IFTTT Platform, where we can add our Applet. After selecting 'New Applet' we're first asked to create a trigger; for this we select 'Google Assistant':

Google assistant

This will present us with some options:

google trigger

We'll be using a phrase with a text ingredient, which allows us to say something like "Hey Google, add [ingredient] to my request". For the POC this ingredient will be an item name, which will be used to add an item to Sitecore under /sitecore/content/Home/Google Home.

Now we'll tell Google that our phrase will be "Create Sitecore item [ItemName]":

google assitant sitecore

And we need a reply from the device we’re speaking to:

google assistant sitecore response

Great, so we've got an "if this"; now it's time for a "then that". Our devices know what we're going to say and how to respond, so we need to create the API controller in Sitecore that's going to create the item for us. The rough example I used is below:

    public class GoogleTestController : SitecoreController
    {
        public ActionResult IFTTT(string itemName)
        {
            // Disable security so this anonymous request can create the item
            using (new Sitecore.SecurityModel.SecurityDisabler())
            {
                var masterDb = Sitecore.Data.Database.GetDatabase("master");
                var template = masterDb.GetItem("/sitecore/templates/System/Templates/Standard template");
                var parentItem = masterDb.GetItem("/sitecore/content/Home/Google Home");

                parentItem.Add(itemName, new Sitecore.Data.Items.TemplateItem(template));

                return Content($"{itemName} added");
            }
        }
    }

And we'll need our custom route, registered in a pipeline processor:

    public void Process(PipelineArgs args)
    {
        RouteTable.Routes.MapRoute("Feature.GoogleTest", "GoogleTest/{itemName}",
            new { controller = "GoogleTest", action = "IFTTT" });
    }
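With the route in place, the IFTTT webhook just needs to hit it with the spoken item name as the final URL segment. As a quick sketch (the host name below is a placeholder, not a real server), the request URL can be built like this:

```python
from urllib.parse import quote

# Placeholder host - replace with your publicly accessible Sitecore instance
BASE_URL = "https://my-sitecore-site.example"

def build_ifttt_url(item_name: str) -> str:
    """Build the URL the IFTTT webhook calls for our GoogleTest route."""
    # The route pattern is GoogleTest/{itemName}, so the spoken
    # ingredient becomes the final URL segment (URL-encoded)
    return f"{BASE_URL}/GoogleTest/{quote(item_name)}"

print(build_ifttt_url("test item"))  # -> https://my-sitecore-site.example/GoogleTest/test%20item
```

IFTTT substitutes the text ingredient into this URL before making the web request.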

That's our Sitecore code set up; now we need to add a webhook so that Google Assistant can talk to our Sitecore API. Under 'then' we select 'Add action' and choose 'Webhooks':

ifttt then

This presents us with a dropdown where we can select 'Make a web request'. Most of these fields are self-explanatory, so the one I'll focus on is the URL. In the below I have used a sample URL, but obviously this needs to be a publicly accessible server with our Sitecore API route available:

Sitecore google api

Now we’re calling our Sitecore API controller, and sending in the name of the item that the user has asked to create. We could stop there and deploy but let’s select ‘Add action’ again and choose ‘notifications’. We want a phone notification when an item has been added to Sitecore:

ifttt notification sitecore

Give the Applet a title and description and save it. The final step is to enable this Applet on our IFTTT devices, simply select ‘Turn on’:

sitecore ifttt applet

Once it's on we can talk to any of the Google devices on our account. I tested this by saying the following to my Google Home: "Ok Google, create Sitecore item test item". The device replies "Ok, creating item test item on your Sitecore instance". We can then confirm that it has added the item to Sitecore:

test item sitecore

We also get a nice phone notification:


The possibilities for this one are endless – Sitecore, it’s invading our homes!

Using the OData Item Service in Sitecore 9 – Step by step

Pre-Sitecore 9, if we wanted to retrieve item data we could easily make use of RESTful services with a simple HTTP GET request. If, for example, we wanted to get the children of an item we could call an endpoint such as:
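For instance, the Sitecore.Services.Client ItemService exposes an item's children at a URL like the one sketched below (the host and item GUID are placeholder values):

```python
# Sketch of an ItemService request URL; the host and item GUID
# below are placeholders, not real values
host = "https://my-sitecore-site.example"
item_id = "110D559F-DEA5-42EA-9C1C-8A5DF7E70EF9"

children_url = f"{host}/sitecore/api/ssc/item/{item_id}/children?database=master"
print(children_url)
```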


Sitecore 9 also introduces the OData Item Service, so I thought I'd take us through it step by step.

First go to the following item in the Core database:

/sitecore/system/Settings/Services/API Key

Right-click the API Key folder and insert an 'OData Item API Key'. An explanation of the fields for this item can be found in the documentation, but here's my setup with an explanation:


Database – fairly self-explanatory: the Sitecore database you're retrieving items from.

Search Filter – we can apply search options using the OData filter format. I've left this as the default, which returns the latest version of items.

CORS Origins – which origins are allowed? Add your required origins separated by semicolons. As I'm testing this on Helix Base I have added the relevant origin.

Allowed Controllers – you may wish to restrict this API key to certain controllers, which can be listed here separated by semicolons. I'm allowing all, so have opted for the wildcard *.

Impersonation User – you could leave this blank, in which case Sitecore will use the default from your Sitecore.Services.AnonymousUser setting. The default is most likely sitecore\ServicesAPI, but I've added it to the field anyway.

Once we've saved the item it's time to copy the item ID, which we can then use in our request headers or on the endpoint. For example, if my item ID is 1234 I could use a key of sc_apikey and a value of 1234 in my headers, or apply it to the endpoint as follows:
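As a sketch of both approaches (the host and key value below are placeholders), the key can travel either as a request header or as a query-string parameter:

```python
import urllib.request

host = "https://my-sitecore-site.example"  # placeholder host
api_key = "1234"                           # your API key item ID

odata_path = "/sitecore/api/ssc/aggregate/content/Items"

# Option 1: send the key in the sc_apikey request header
req = urllib.request.Request(f"{host}{odata_path}", headers={"sc_apikey": api_key})

# Option 2: append the key to the endpoint as a query string
url_with_key = f"{host}{odata_path}?sc_apikey={api_key}"
print(url_with_key)
```

Either form authenticates the request against the API key item we created above.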


Using either approach we can then see the fruits of our labour…


To see a list of endpoints for the OData Item Service, see the Sitecore documentation.

Step by step: Adding a project to a Helix solution

When adding projects to our Helix solutions there is an order we must follow, or problems can arise. For example, the path to our NuGet package folder in our project file can be incorrect if we don't set up the new project correctly. This can result in you becoming the token "it works on my machine" person, and nobody wants to be that guy! So I've created a step-by-step guide.

First we add a solution folder to the layer in which we wish to add our module. In this case we’re adding a module to the Foundation layer:


Give your folder a module name; in this instance we're using 'SecurityExtensions'. Next, create this folder on disk at the correct location:


Back in Visual Studio, next we add a project:


This needs to be a web application. Pay attention to the name and location, as this is an important step, and don't forget to assign a target framework:


Next we’re adding the unit tests project:


Click ‘OK’. Now we need to make sure the Web.config file doesn’t get published, set the build action to ‘None’:


Close Visual Studio. Next we move some folders on disk. First rename the folder that has been automatically created:


And change the name to ‘code’ (one way to avoid this step would be to call your project ‘code’, and then rename the project in VS):


Next we need to find our unit tests project which is in the solution root:


We move this folder to our SecurityExtensions folder:


Then rename it to ‘Tests’, your structure should appear as follows:


Now we can open Visual Studio, where we will see an error loading the projects because the paths have moved. For each project we need to change the file path: select the path and click the '…' icon:


Point Visual Studio to the correct location:


Perform this step for both your project and your tests project. Once this is done right click on each of these projects and select ‘Reload Project’.

Finally we can add a publish profile. Create a file system publish profile:


Select Local IIS as the target location and select your Sitecore site:


I usually use a Debug configuration and name the profile ‘Localhost’:


You should now be able to add projects to your Helix solutions.

Sitecore setup considerations

After working with the Sitecore platform for a few years I’ve summed up a few useful tips which I’ve used to enhance the setup. These are mainly based around optimisation but also cover some other issues that can arise. As with any architectural decision it’s best to plan these as early as possible, but the following should be considered whether you already have an instance setup, or if you’re about to embark upon a new project:

Trim your versions

Versioning is something that can be overlooked, but an out-of-the-box Sitecore instance will create an unlimited number of page versions, which carries a significant performance cost. Each time an editor changes a page they create a new version, so we could end up with literally hundreds of versions of pages on our site, most of which are not required.

We need to limit the number of versions for our Sitecore items. The first tip is to implement a rule to limit versioning; if you don't read any further, at least pay attention to this step and ensure that you limit your page versions to a useful number such as ten.

File unlocks

There is a common problem for those with more than a handful of content editors, whereby editors will often need access to a file that has been locked by another user who has forgotten to check it in. For good reason, Sitecore out of the box doesn't allow users to unlock another user's items, and if organisational governance dictates that only admins can unlock files we're restricted to the following options:

  1. Make sure editors are educated to check in any items they have locked – this is unreliable and doesn't account for human error.
  2. Ensure that users can always contact an admin – this places a dependency on an admin and could be quite time consuming for them, which doesn't help efficiency.
  3. Write an admin tool to allow mass unlocks – this could be useful but still requires an admin to be present, who would have to exercise caution when using such a tool.
  4. Make editors admins – this is a bad idea for obvious reasons. Also, when an admin edits a page it doesn't automatically create a new version, so it adds a requirement for them to remember to manually add a version.

The best solution is to reduce the dependency on an admin by creating a new user access right. We can then allow editors to unlock certain items within Sitecore that have been locked by other editors. Obviously this still requires governance and education in the use of the new feature, but it resolves our issue.

Restrict your publishing
Publishing impacts data, item and HTML cache so we need to use it sparingly and there needs to be a reason for anything other than a Smart publish of items.

It is important to assign the correct user access rights to restrict publishing options. If a user has all publish options available to them they will likely use them so accounts must be restricted, ideally to a Smart publish only. We can restrict the publish options available to our users by setting security in the Core database. It probably goes without saying that a full site publish should be a rare thing, as should a full search re-index.

Patch your configs

This is an absolute must. Patch your configs and leave the originals well alone! For one, it will make life a lot easier when you upgrade a Sitecore instance.

Patch files are stored in App_Config/Include by default, but this location can be overridden within Sitecore. Don't forget the ShowConfig.aspx admin tool to view the combined results of your patch files.

Use Sitecore NuGet

As the title says, make use of the Sitecore NuGet feed and if you’re using VSTS for CI/CD be sure to read the following article.

You should also be using a NuGet package restore for a clean development environment and smaller repository size.

Avoid performance degrading methods

Methods such as GetDescendants in the content API will impact site performance as your site data grows. A Sitecore instance making heavy use of such methods will suffer from slow performance; instead we should use the Sitecore search API, which queries Solr/Lucene.

I have also seen performance issues with large Sitecore trees. If for example you have 200 child items which all use a template with a Treelist field, a simple switch to the TreelistEx field type will offer a significant performance increase. We should conduct an audit of our templates and ensure that the field types have been optimised. Obviously we also need to consider buckets.

Cache your renderings
Although we need to think about the values of the Prefetch, Data and Item caches in our config files, I want to remind us of a very obvious tip, which is to make your renderings cacheable. There are very few occasions where I have had to completely switch off caching on a rendering, so you should check all renderings and ensure that some form of caching is enabled.

Dependency Injection

Implementing this software design pattern allows us to swap our providers and unit test, and ensures that our code is loosely coupled, reusable and maintainable. However, the use of a DI container can carry a performance hit, so we should study the DI container benchmarks and choose accordingly. We should also be aware that with an IoC container we can lose compile-time checking, so we must think carefully about this architectural decision.

Use friendly field titles
Fields should contain user friendly titles to assist our content editors, this is a fairly obvious step but it’s one that’s often ignored.

Manage your redirects
Using IIS redirects is best for performance, but if for any reason you're not doing this you need to handle them in Sitecore. We have access to aliases, but it's a good idea to set up your own 301 redirect component and manage your redirects as items in a folder. Remember to give your folder structure some consideration. It's easy enough to create your own module, or you can follow a 301 redirect setup guide.

Sitecore Optimization – A/B testing

Having presented on this topic at a breakout session at Sitecore Symposium 2016, I thought it would be fitting to write a quick guide. Our 'Getting Started' session was based around optimizing Sitecore:

“Great! You’ve got personalization and Sitecore Experience Editor. Now you need to start optimizing with A/B and multivariate testing, and then build segments based on that optimization.”

How do you 'make your Sitecore environment be the best it can be, fast'? That involves optimization, and one aspect of this is A/B testing. So why test? Firstly, it goes hand in hand with Sitecore's powerful personalization system – after all, why personalize content if the content itself is underperforming? A/B testing allows us to see which content works best in which context; we can then go ahead and make use of personalization. The two complement each other perfectly.

What is an A/B test? There's plenty of information about A/B testing, so I will briefly summarise by stating the obvious… it's variation A compared to variation B, one 'thing' versus another. That could be something as basic as tweaking the design of a call to action on your component and seeing which variation is more effective by monitoring how many clicks it received (measuring its engagement value). The results of A/B tests are built into Sitecore and are very clearly presented. Sitecore also measures the customer experience, so we know how engaged our visitors are – useful!

How do we start? First off, we start with a hypothesis; we need something to test. At Symposium I hypothesised that we could increase downloads of a PDF (which increases customer engagement) by tweaking the style of a component. We can then test our hypothesis by setting up the A/B test with our goal attached to PDF downloads. Once our test is set up we simply watch the results come in and either let Sitecore pick the winner for us, or select a winner manually based on the results.

Test duration – the duration of a test is important as it has an impact on our results. You may, for instance, have an eCommerce site on which you're measuring product sales; obviously external factors could influence a test. For example, at times such as Christmas we'll see a bigger engagement value, so we're going to see different results at different times of the year. It's therefore important to take such factors into consideration and set a suitable duration accordingly. Sitecore also makes use of gamification when setting up an A/B test – whoever creates the test has to guess the outcome, and the more accurate a user is at estimating results, the shorter the test duration Sitecore will suggest.

Test results – Sitecore presents us with a lot of information when it comes to testing, including very clear stats on our goal. I'd recommend you read the documentation about test results. To access the results themselves we can either click the 'Test result' button on our page, or visit the Experience Optimization app in the Sitecore Launchpad. This app provides a whole host of useful features, such as how accurate our users are at predicting the outcome of tests.

Considerations – there are some things to be aware of when setting up a test. First of all, if you're logged in as an administrator, remember to create a new version of the page before setting up the A/B test. Tests work via workflow, and our non-admin users automatically create a new version of a page when they select 'lock and edit page'. However, this is not the case for an admin, so remember to do this before setting up a test. An alternative is to start tests without using workflow. Finally, it's always good to avoid retrofitting – knowing your goals before embarking upon the implementation of optimization makes life a lot easier. As with any digital architecture, it should be clearly thought through as early as possible.

Sitecore: Remotely executing a Unicorn sync with PowerShell

When trying to run the Unicorn sync via PowerShell I encountered the following error:

Invoke-WebRequest : Access Denied
You need to sign in to Sitecore as an administrator to use the Unicorn control panel.

So I thought I'd write a quick step-by-step guide to setting this up. The first thing you need to do is add the PowerShell remote scripting files to your VS project. Once you have done this, create a shared secret in the Unicorn.UI.config file:


The next step is optional, but I deleted the MicroCHAP.dll file we just downloaded from my project and updated the reference in the Unicorn.psm1 file to point to the DLL in the bin folder:

$MicroCHAP = $ScriptPath + 'PATH_TO_BIN_FOLDER\MicroCHAP.dll'

I then tweaked the sample.ps1 script to make it a bit more flexible:

$url = $args[0]
$deploymentToolAuthToken = $args[1]

if ($url.Length -eq 0 -or $deploymentToolAuthToken.Length -eq 0) {
    Write-Error "ERROR - missing url or deployment token." -ErrorAction Stop
}

$ScriptPath = Split-Path $MyInvocation.MyCommand.Path

# Remotely execute a Unicorn sync using the new CHAP authentication system
Import-Module $ScriptPath\Unicorn.psm1

$syncUnicornUrl = $url + "/unicorn.aspx"

Sync-Unicorn -ControlPanelUrl $syncUnicornUrl -SharedSecret $deploymentToolAuthToken


After a publish we can now execute our PowerShell script. Open up PowerShell, navigate to your directory and test the script:

.\sample.ps1 http://dev MY_SHARED_SECRET

This should perform a sync, and after confirming it's working correctly we could now call the script remotely. However, if you receive the error above, make sure your version of Unicorn is a recent one (I used 3.2.0) and that MicroCHAP is up to date – I had to update mine to 1.2.2, which resolved the error.

Another issue to be aware of: if you test your PowerShell script and leave the PowerShell window open, your solution may not build, as it complains that MicroCHAP.dll is in use. So be sure to close PowerShell after testing your script.

The purpose of setting this up is to allow us to set up automated deployments for our Sitecore items using CI/CD – something I'll blog about in future.

CI/CD with VSTS Part 4: Using MS Release Management for Continuous Delivery

In part 3 we looked at a way of setting up CD from VSTS to on-premise by deploying directly from our MSBuild definition. This works, but it's best to use a more robust form of CD, and Release Management (RM) allows us to do this. Our deployments will be managed by RM, so if anything goes wrong we can easily roll back to previous artifacts. This method for CD is therefore advised, and in order to set it up we need to use the same build definition which was set up in part 2 (we'll be ignoring part 3). This will place our artifacts into the staging directory ready for deployment by RM.

As with part 3, I'm going to assume that your target machine has Web Deploy set up and running.

Go to the release tab of VSTS and create a release definition, start with an empty template:


Next select your project and the build definition, also check the ‘Continuous deployment’ box:



Give your release definition a name and add an environment. Once this is setup, we need to add a task to push our artifacts to our environment. Add the ‘FTP Upload’ task:


It needs to be configured as follows. First, the FTP service endpoint needs to be set up by clicking 'Manage' and adding a new Generic service endpoint which contains the credentials to access FTP on the environment. Once you have added these, select the endpoint. For 'Source folder', simply select the artifacts that we generated in part 2. The 'Remote directory' is the folder we'll be uploading the artifacts to on our environment.

After the FTP step is configured, we need to install the 'IIS Web App Deployment Using WinRM' extension from the Visual Studio Marketplace into our VSTS. To do so, click 'Add tasks' and click on the marketplace link:


Select and install the following app:


You will now see some new tasks to add, select the WinRM – IIS Web App Deployment:


Now configure it as follows:



That's our release definition set up. When we check in our code, Release Management will deploy the package to our on-premise server using Web Deploy. You will see a list of your releases and can select any to view the info:


We’re now deploying from VSTS to an on premise server using the cloud based build agent, and we’re making use of Release Management for continuous deployments. Who said VSTS only deploys to cloud machines 🙂

CI/CD with VSTS Part 3: Using MSBuild for Continuous Delivery

So we've followed the first and second articles about setting up CI, and we're ready to implement CD. Just to summarise the objective: after a dev checks their code in to VSTS, we want the hosted build agent to run a build and then deploy the latest changes automatically to our on-premise dev server. Cloud-based continuous integration and deployment using VSTS (apart from the on-premise server of course, although this server could be hosted anywhere).

I'm going to assume that your target machine has Web Deploy set up and running.

In part 3 we'll look at deploying directly via MSBuild using a Web Deploy publish profile. This isn't recommended – I'd use Release Management instead (covered in part 4) – but it's good to understand our options. This method is also a good way to lower the consumption of your VSTS hosted build agent minutes.

First we need to add a publish profile in VS. Setup a ‘Web Deploy’ publish profile in Visual Studio, and hit the publish button to check that it works:


Once this is active we simply need to edit our Build definition in VSTS, and use something similar to the following build arguments:

/p:DeployOnBuild=true /p:AllowUntrustedCertificate=true /p:PublishProfile=MyNewProfile /p:Password=MyPassword

Make sure the ‘PublishProfile’ name matches the Web Deploy profile that you created.


If you have been following the tutorials from the first article you may have a build step in place to publish build artifacts:


You could go ahead and remove this as the deploy is handled in our MSBuild arguments… but I’d recommend that you use Release Management for your CD which we’ll cover in part 4.

However, if you want a very simple way to implement CD, which will offer a fast build and deployment we’re now up and running – CD from VSTS to on premise! Just remember, if you’re not using Release Management and artifacts (part 4) you can’t quickly rollback a release.

Setting up MongoDB for a local Sitecore instance

There's plenty of information about setting up MongoDB, particularly in their install guide, but for those who want a straightforward beginner's guide, here's an article for you. Let's add another post to the 'Getting Started' series and look at installing MongoDB on localhost.

The first step is to check Sitecore compatibility. Once you know the required version, download it from the MongoDB Download Center. When running the installer it's probably best to use a simple path such as c:\MongoDB.


Once installed browse to your MongoDB directory and create two new folders:


Also add a ‘mongo.config’ file to the MongoDB directory and add the paths to your newly created folders:
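As a sketch, assuming the two folders created above were named 'data' and 'logs', the mongo.config file just needs the database and log paths in the key=value config format:

```
dbpath=c:\MongoDB\data
logpath=c:\MongoDB\logs\mongo.log
```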


Now open up a command prompt in Administrator mode and start your mongo server:


Should you wish to execute any mongo shell commands, you can start mongo.exe:


You could also start MongoDB as a windows service using the following command in a dos prompt:
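Assuming the config file path used above, the service install command would look something like this (run from an elevated prompt):

```
mongod --config "c:\MongoDB\mongo.config" --install
```

Once installed, the service can be started with `net start MongoDB`.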


You may need to wrap the path in quotes, and don't forget to check that your MongoDB service is up and running (a restart may be required):


So that’s our MongoDB up and running, I’d advise at this point that you install MongoVUE as a UI.

Next come our Sitecore connection strings; the syntax is as follows:


So for our analytics database on a demo site we’d use:
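The connection strings follow the standard mongodb:// URI syntax (mongodb://host:port/database). As a sketch for a local demo site (the server, port, and database names here are assumptions), the entries in ConnectionStrings.config would look like:

```xml
<add name="analytics" connectionString="mongodb://localhost:27017/demosite_analytics" />
<add name="tracking.live" connectionString="mongodb://localhost:27017/demosite_tracking_live" />
<add name="tracking.history" connectionString="mongodb://localhost:27017/demosite_tracking_history" />
<add name="tracking.contact" connectionString="mongodb://localhost:27017/demosite_tracking_contact" />
```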


These should be configured for you when you install an instance of Sitecore using SIM, but it's good to be aware of them. Sitecore uses MongoDB for the Experience Database, so you should have four Mongo connection strings, named 'analytics', 'tracking.live', 'tracking.history', and 'tracking.contact'. You can verify these are working correctly by running your Sitecore instance (which will create the databases) and then using MongoVUE to see that your databases have been successfully created:


CI/CD with VSTS Part 2: Extending our CI build definition to create a package

So in an earlier blog we set up a basic working version of CI using VSTS. Now we'll look at slightly extending this to create a package in preparation for deployments. We could bypass creating a package and automate our deployments using MSBuild arguments, which I'll cover in part 3, but it's good to be aware of all the options. The creation of a package also ties in to the use of Release Management for CD, which is the preferred method.

Let's look at creating a package which can be used by MSDeploy. Doing so is fairly easy and involves MSBuild arguments. We need to edit our build and update our build arguments as follows:

/p:DeployOnBuild=true /p:PackageLocation="$(build.StagingDirectory)"

Once your arguments have been updated, add a build step to your build definition and select Utility->Publish Build Artifacts


Now add the following settings:


We could now use the package present in the staging directory for deployment via the likes of Release Management. Run a build and you will see a new 'Artifacts' tab which contains the files for your deployment:


In part 3 we'll look at skipping this step and using MSBuild to deploy directly, but for now we understand packaging, which is the correct way to deploy alongside Release Management.

Note: if you are using the staging directory, you will need to update your test assembly path on your test assemblies build step: