Together with the local Drupal Community, we are inviting you to join us for Drupal Mountain Camp in Davos, Switzerland. More than 200 attendees are expected to come for sessions, workshops, sprints and stay for the community ... as well as a great amount of powder for those interested in skiing or snowboarding under perfect conditions!
Josef Dabernig, Fri, 12/02/2016 - 15:58
After a very successful and very interesting Drupal Commerce Camp in 2011, the team of Drupal Events Schweiz decided that it is again time for a Drupal Camp in Switzerland. Since Switzerland provides so much more than bright attendees and speakers, we also want to show the beauty of our country and its mountains. We found the perfect location for this: Davos!
The camp will happen from 16 to 19 February 2017 at the Davos Congress Centre. We expect around 200 attendees from Switzerland, all over Europe and the world. We will feature a day of summits, two days of sessions, a day fully dedicated to sprints, and social activities each day.
I'm especially excited that Preston So has been confirmed as the first keynote speaker. He will be giving a talk on "API-first Drupal and the future of the CMS". In addition, we have confirmed a number of international and Swiss speakers. Interested in presenting? The call for sessions is open until the beginning of January.
Sprints are a great place to get involved with the development of Drupal 8, join an initiative, and work with experts and people interested in the same areas. See the sprint sheet to sign up and join forces on improving Media, Paragraphs, Drupal 8 core, and the Rules module for Drupal 8.
We are thankful to the many sponsors who are already helping keep ticket prices low. If you are interested in finding Drupal talent or providing your services to Swiss customers, this is a unique opportunity. See the Drupal Mountain Camp website for information about sponsoring, or contact Michael directly.
Discounted hotel options are available from CHF 59 per person/night via the following link: http://www.davoscongress.ch/DrupalMountainCamp
Early Bird tickets are available until the end of December for only CHF 80. With your purchase you also get a discount on travel with the famous Swiss railways. There is more to come!
See you 16-19 February in Davos, Switzerland. In the meantime, follow us on Twitter.
This is again an excerpt from my talk at #DCD2016. The second part of my talk was on Drupal Enterprise Adoption.
Here is where we look at Drupal modules running on less than 1% of reporting sites. Today we investigate Tota11y which helps you visualize how your site performs when using assistive technologies. More info on Blue Beanie Day can be found at bluebeanieday.tumblr.com.
Three weeks ago I wrote about our quest for performance at the Socialist party. This week we had a follow-up sprint, and I want to thank you for all the comments on that blog.
During this sprint we looked into whether the number of groups (around 2,700) was slowing down the system. We developed a script to delete a set of groups from all database tables, removed around 2,400 groups from the system, and saw that this had a positive impact on performance.
Before deleting the groups, adding a new group took around 14 seconds. After removing 2,400 groups, adding a new group took around 3 seconds. That gave us a direction in which to look for a solution.
We also looked at what would happen if we deleted all contacts without a membership from the database. That also had a positive impact, though not as large as reducing the number of groups. The reason we looked into this is that around 200,000 contacts in the system are not members but sympathizers of a specific campaign.
We also had an experienced database specialist (who mainly knows Postgres) looking into database tuning; we don't yet know the outcome of his inspection.
From what we discovered by reducing the groups, we have two paths to follow:
- Actually reducing the number of groups in the system
- Developing an extension which does functionally the same thing as groups, but with a better structure underneath and developed with performance in mind (no civicrm_group_contact_cache; no need for nesting with multiple parents; no need for smart groups).
Both paths are going to be discussed at the socialist party and in two weeks we have another sprint in which we hope to continue the performance improvements.
Startups and products can move faster than agencies that serve clients, as there are no feedback loops or manual QA steps by an external authority that can halt a build going live.
One of the roundtable discussions that popped up this week while we're all in Minsk is that agencies which practice Agile transparently, as SystemSeed does, see a common trade-off. CI/CD (Continuous Integration / Continuous Deployment) isn't quite possible as long as you have manual QA and that lead time baked in.
Non-Agile (or “Waterfall”) agencies can potentially supply work faster but without any insight by the client, inevitably then needing change requests which I’ve always visualised as the false economy of Waterfall as demonstrated here:
Would the client prefer Waterfall plus change requests, being kept in the dark throughout development, with all work potentially delivered faster (and never in the final state)? Or would they prefer full transparency, with having to check all story details, QA and sign-off, as well as multi-stakeholder oversight? In short, it can get complicated.
CI and CD isn’t truly possible when a manual review step is mandatory. Today we maintain a thorough manual QA by ourselves and our clients before deploy using a “standard” (feature branch -> dev -> stage -> production) devops process, where manual QA and automated test suites occur both at the feature branch level and just before deployment (Stage). Pantheon provides this hosting infrastructure and makes this simple as visualised below:
This week we brainstormed Blue & Green live environments which may allow for full Continuous Integration whereby deploys are automated whenever scripted tests pass, specifically without manual client sign off. What this does is add a fully live clone of the Production environment to the chain whereby new changes are always deployed out to the clone of live and at any time the system can be switched from pointing at the “Green” production environment, to the “Blue” clone or back again.
Assuming typical rollbacks are simple and databases are either in sync or both Green and Blue codebases link to a single DB, then this theory is well supported and could well be the future of devops. Especially when deploys are best made “immediately” and not the next morning or in times of low traffic.
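The switching logic described above can be sketched as a toy model (names and structure here are illustrative, not our actual tooling): a router points at whichever environment is live, deploys always go to the idle environment, and traffic only switches once automated tests pass.

```python
class BlueGreenRouter:
    """Toy model of blue/green deployments: deploys go to the idle
    environment, and traffic is switched only after tests pass."""

    def __init__(self):
        self.live = "green"          # environment currently serving traffic
        self.versions = {"green": "v1", "blue": "v1"}

    @property
    def idle(self):
        return "blue" if self.live == "green" else "green"

    def deploy(self, version, tests_pass):
        """Deploy a new version to the idle environment, then switch
        traffic to it only if the automated test suite passes."""
        self.versions[self.idle] = version
        if tests_pass:
            self.live = self.idle    # instant switch; old env kept for rollback
            return True
        return False                 # live environment is left untouched

    def rollback(self):
        """Point traffic back at the previous environment."""
        self.live = self.idle


router = BlueGreenRouter()
router.deploy("v2", tests_pass=True)
print(router.live, router.versions[router.live])   # blue v2
router.rollback()
print(router.live, router.versions[router.live])   # green v1
```

Because the previous environment is never modified by a deploy, rollback is just flipping the pointer back.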
In this case clients would be approving work already deployed to a production-ready environment which will be switched to as soon as their manual QA step is completed.
One argument made was that our Pantheon standard model allows for this in Stage already, we just need an automated process to push from Stage to Live once QA is passed. We’ll write more on this if our own processes move in this direction.
As you may know, Drupal 6 has reached End-of-Life (EOL) which means the Drupal Security Team is no longer doing Security Advisories or working on security patches for Drupal 6 core or contrib modules - but the Drupal 6 LTS vendors are and we're one of them!
All the update does is mark the permission to administer Elysia Cron as "dangerous", because it allows users to execute arbitrary PHP code. This is by design; it's an explicit feature of Elysia Cron. If it weren't intended by the module authors, it would be a Remote Code Execution vulnerability. However, users might not be aware that the permission grants the ability to execute PHP, hence the security advisory!
Unfortunately, there isn't a way to mark a permission as dangerous under Drupal 6. There isn't even a way to have separate machine names and human-readable labels for permissions, so there isn't a straightforward way to add a user-visible message. :-(
So, the Drupal 6 Long-Term Support vendors (us included) have decided to simply announce the problem and ask anyone using Elysia Cron to audit which users/roles have the "administer elysia_cron" permission and make sure it's OK that they can execute arbitrary PHP code.
We're going to be auditing the permission on our clients' sites, so, if you're one of our customers - no need to worry! We'll contact you if we have any concerns.
If you'd like us to handle this and similar issues, as well as have all your Drupal 6 modules receive security updates with the fixes deployed the same day they're released, please check out our D6LTS plans.
On November 19th, Appnovation held their 1st ever Drupal Code Sprint Day, another sign of Appnovation's strong commitment to the Drupal open source community.
Recent additions to Drupal 7's MailChimp module and API library offer some powerful new ways for you to integrate Drupal and MailChimp. As of version 7.x-4.7, the Drupal MailChimp module now supports automations, which are incredibly powerful and flexible ways to trigger interactions with your users. Want to reach out to a customer with product recommendations based on their purchase history?
aaron, Wed, 11/30/2016 - 09:25
An OSTraining member asked us how to set up the Message notifications stack in Drupal.
Once you install the required modules, you will notice that a working notification example already exists, making the setup process easier. Let's start...
Our latest site built with Drupal Commerce 1.x, Freitag, went live in July 2016. Since then we've been adding several new commerce-related features, so I feel it's time to write a wrap-up. The site has several interesting solutions; this article will focus on commerce.
First, a few words about the architecture. platform.sh hosts the site. The stack is Linux + nginx + MySQL + PHP, and the CMS is Drupal 7. Fastly caches HTTP responses for anonymous users and also for authenticated users with no additional role (that is, logged-in customers). The Authcache module takes care of lazy-loading the personalized parts (like the user menu and the shopping cart). Freitag has an ERP system to which we connect using the OCI8 PHP library. We write Behat and simpletest tests for QA.
We use the highly flexible Drupal Commerce suite. 23 of the enabled Freitag contrib modules have a name starting with 'commerce'. We applied around 45 patches on them. Most of the patches are authored by us and 15 of them have already been committed. Even with this commitment to solve everything we could in an open-source way, we wrote 30,000+ lines of commerce-related custom code. The bulk of this is related to the ERP integration. Still, in March 2016 Freitag was the 3rd largest Drupal customer contributor.
The words ‘product’ and ‘product variation’ I’ll be using throughout the article correspond to ‘product display node’ and ‘product’ in Drupal Commerce lingo.
ERP is the source of all products and product variations. We import this data into Drupal on a regular basis using Feeds. (Now I would use Migrate instead; it's better supported and easier to maintain.) ERP also lets Drupal know about order status changes, sends the shipping tracking information and informs Drupal about products sent back to Freitag by customers.
There is data flowing in the opposite direction as well. ERP needs to know about all Drupal orders. Also, we create coupons in Drupal and send them to ERP too for accounting and other reasons.
We send commerce-related emails using the Message stack. This way we can have order-related tokens in our mails and we can manage and translate them outside the Rules UI. Mandrill takes care of the mail delivery.
It was a client requirement to use the Swiss Datatrans payment gateway. However, at the time of starting the project, Commerce Datatrans (the connector module on drupal.org) was in dev state and lacked several features we needed. Pressed for time, we opted to buy a Datatrans Drupal module from a company offering this solution. It turned out to be a bad choice. When we discovered that the purchased module still did not cover all our needs and looked at the code, we found that it was obfuscated and pretty much impossible to change. Also, the module could be used only on one site instance, which made it impossible to use on our staging sites.
We ended up submitting patches to the Commerce Datatrans module hosted on drupal.org. The module maintainer, Sascha Grossenbacher (the well-known Drupal 8 core contributor), helped us solve several issues and feature requests by reviewing our patches. This process has led to a stable release of Commerce Datatrans with a dozen feature improvements and bugfixes.
In addition to Datatrans, we use Commerce Custom Offline Payments to enable offline store purchases by store staff and bank transfer payments.
The site works with 7 different currencies, some of them having two different prices depending on the shipping country. Prices come from ERP and we store them in a field collection field on the product. We do not use the commerce_price field on the product variation.
Freitag ships to countries all around the world. VAT calculations are performed for EU, Switzerland, UK, Japan, South Korea and Singapore. To implement this functionality our choice fell on the commerce_vat module. Adding commerce_eu_vat and commerce_ch_vat released us from having to maintain VAT rates for EU and Switzerland ourselves. For the 3 Asian countries we implemented our own hook_commerce_vat_rate_info().
We have two different VAT rates for most of the countries. This is because usually a lower VAT rate applies to books. Drupal imports the appropriate VAT rate from the ERP with the product variation data. This information is handled by price calculation rules in Drupal.
Freitag delivers its products using several shipping providers (like UPS, Swiss Post) all around the world. Most shipping providers have many shipping rates depending on the destination country, speed and shipped quantity (weight or volume). On checkout the customer can choose from a list of shipping services. This list needs to be compatible with the order.
We used Rules to implement the shipping services in Drupal, based on the Commerce Flat Rate module. To this end we trained our client to set up and maintain these rules themselves. It was not easy: shipping rules are daunting even for experienced commerce developers. First we needed to set up the "Profile Address" rules components. Then we configured the "Place of Supply" components. We applied these in turn in the condition part of the shipping rules components themselves.
The weakest point of any implementation based on Rules is maintenance. It's not easy to find a specific rule after you have created it, and having 250 rules components for shipping alone made this feeling stronger.
The shipping line item receives the VAT rate of the product with the highest VAT rate in the order.
Freitag has 6 different coupon types. They differ in who can create them, who and where (online/offline) can redeem them, whether partial redemption is possible, whether it’s a fixed amount or percentage discount and whether Freitag accounting needs to know about them or not.
Based on these criteria we came up with a solution featuring Commerce Coupon. Coupons can be discount coupons or giftcards. Giftcard coupons can only have a fixed value. Discount based coupons can also apply a percentage discount. The main difference between them is that customers can partially redeem giftcards, while discount-based coupons are for one-time use.
To make coupons work with VAT was quite tricky. (To make things simpler we only allowed one coupon per order.) Some coupon types work as money which means that from an accounting point of view they do not actually decrease the order total (and thus the VAT) but work as a payment method. Other coupon types however do decrease the order total (and thus the VAT). At the same time Drupal handles all coupons as line items with a negative price and the Drupal order total does decrease in either case.
The solution we found was to use Commerce proportional VAT. Axel Rutz maintains this brilliant little module and he does so in a very helpful and responsive manner. All the module does is add negative VAT price components to coupon line items to account for the VAT decrease. It decreases the order total VAT amounts correctly even if we have several different VAT rates inside the order.
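The arithmetic behind this can be sketched in a few lines (my own illustration of the idea, not the module's actual code): the coupon is split across the order proportionally to each line item's gross total, and each share yields a negative VAT component at that item's rate.

```python
def coupon_vat_components(line_items, coupon_amount):
    """Split a coupon proportionally across VAT-bearing line items and
    return the negative VAT component per rate (prices include VAT)."""
    order_total = sum(item["gross"] for item in line_items)
    components = {}
    for item in line_items:
        share = coupon_amount * item["gross"] / order_total
        rate = item["rate"]
        # VAT portion of a gross amount at rate r is gross * r / (1 + r)
        components[rate] = components.get(rate, 0.0) - share * rate / (1 + rate)
    return components


# A hypothetical order with one item at 8% Swiss VAT and one at the
# reduced 2.5% rate, discounted by a 10% coupon (21.05 off 210.50):
items = [{"gross": 108.0, "rate": 0.08}, {"gross": 102.5, "rate": 0.025}]
comps = coupon_vat_components(items, 21.05)
print({rate: round(value, 2) for rate, value in comps.items()})
# {0.08: -0.8, 0.025: -0.25}
```

Even with two different rates in the order, the negative components sum to exactly the VAT share of the discount.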
Although there’s always room for increasing the complexity of the commerce part of the site (let’s find some use case for recurring payments!), it’s already the most complicated commerce site I’ve worked on. For this Drupal Commerce provided a solid foundation that is pleasant to work with. In the end, Drupal enabled us to deliver a system that tightly integrates content and commerce.
If you haven't noticed from our Twitter feed, this week we've flown the team to Minsk in Belarus to socialise, eat, drink and be merry, while maybe taking in a little culture and even some work(!)
One of the first things that we all notice when the team gets together is that you can't hug someone over Skype… you can't make the other person a cup of tea, or complain about the same weather.
While distributed teams can pick the world's finest global talent as far as timezones and personal or client flexibility allow, meeting in person is something worth doing on a regular basis. It's not essential that every day is spent this way, and limiting yourself to local talent, or those willing to relocate, is not the most sensible choice in our modern era of collaborative tools and communication methods. We'd still much rather be distributed, but we greatly appreciate these times together.
We’ll continue to blog through the week and Tweet some extras as they are happening.
From all of us in our temporary Minsk HQ - have a fun and productive day and if you are sat next to a colleague give them a hug or make them a cup of tea. Not all teams can enjoy this luxury every day of their work life.
Here is where we look at Drupal modules running on less than 1% of reporting sites. Today we'll look at Configuration Split, a module which allows you to export only the Drupal 8.x configuration you want to production. More information can be found at http://nuvole.org/blog/2016/nov/28/configuration-split-first-beta-release-drupal-ironcamp.
Ever wonder what goes into a conference or other business event that participants will gush about (in a good way) for years? After an event of these mythical proportions, participants walk away raving about the food, the speakers, the social events, the aura, and the list goes on ...
But pulling off such an event is no easy feat. Thus, I decided to speak with Lead DrupalCon Coordinator Amanda Gonser to find out how she manages to make sure the DrupalCon event design is flawless and fits into the overall event planning process seamlessly.
(This video may cause unexpected bursts of laughter. If you cannot laugh in your current environment, please scroll down for the written version.)
What makes a good designer? Well, of course you have to be creative, understand how to solve problems in unconventional ways, and do it all within budget. But wait, there's more to it than being super creative and solving problems. You must be able to make others understand how your design vision solves their problems.
I wanted to find a way to pull data from one Drupal 8 site to another, using JSON API to expose data on one site, and Drupal’s Migrate with a JSON source on another site to consume it. Much of what I wanted to do was undocumented and confusing, but it worked well, once I figured it out. Nevertheless, it took me several days to get everything working, so I thought I’d write up an article to explain how I solved the problem. Hopefully, this will save someone a lot of time in the future.
I ended up using the JSON API module, along with the REST modules in Drupal Core, on the source site. On the target site, I used Migrate from Drupal Core 8.2.3 along with Migrate Plus and Migrate Tools.

Why JSON API?
Drupal 8 Core ships with two ways to export JSON data. You can access data from any entity by appending ?_format=json to its path, but that means you have to know the path ahead of time, and you'd be pulling in one entity at a time, which is not efficient.
You could also use Views to create a JSON endpoint, but it might be difficult to configure it to include all the required data, especially all the data from related content, like images, authors, and related nodes. And you’d have to create a View for every possible collection of data that you want to make available. To further complicate things, there's an outstanding bug using GET with Views REST endpoints.
JSON API provides another solution. It puts the power in the hands of the data consumer. You don't need to know the path of every individual entity, just the general path for an entity type and bundle, for example: /api/node/article. From that one path, the consumer can select exactly what they want to retrieve just by altering the URL. For example, you can sort and filter the articles, limit the fields that are returned to a subset, and bring along any or all related entities in the same query. Because of all that flexibility, that is the solution I decided to use for my example. (The Drupal community plans to add JSON API to Core in the future.)
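As a sketch of how those query-string parameters compose (the parameter names follow the JSON API specification the module implements; the site URL and field names are just this article's example values):

```python
from urllib.parse import urlencode


def jsonapi_url(base, entity_path, **params):
    """Build a JSON API collection URL with optional sort/filter/
    sparse-fieldset/include parameters appended as a query string."""
    query = {"_format": "api_json"}
    query.update(params)
    return f"{base}/api/{entity_path}?{urlencode(query)}"


# Newest articles first, only title and body, with related images included:
url = jsonapi_url(
    "http://sourcesite.com",
    "node/article",
    **{
        "sort": "-created",
        "fields[node--article]": "title,body",
        "include": "field_image",
    },
)
print(url)
```

The consumer decides what comes back; the server exposes one collection path per entity type and bundle.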
There's a series of short videos on YouTube that demonstrate many of the configuration options and parameters that are available in Drupal's JSON API.

Prepare the Source Site
There is not much preparation needed for the source because of JSON API's flexibility. My example is a simple Drupal 8 site with an article content type that has a body field and a field_image image field, the kind of thing core provides out of the box.
First, download and install the JSON API module. Then, create YAML configuration to "turn on" the JSON API. This could be done by creating a simple module that has YAML file(s) in /MODULE/config/optional. For instance, if you created a module called custom_jsonapi, a file that would expose node data might look like:
id: entity.node
plugin_id: 'entity:node'
granularity: method
configuration:
  GET:
    supported_formats:
      - json
    supported_auth:
      - basic_auth
      - cookie
dependency:
  enforced:
    module:
      - custom_jsonapi
To expose users or taxonomy terms or comments, copy the above file, and change the name and id as necessary, like this:
id: entity.taxonomy_term
plugin_id: 'entity:taxonomy_term'
granularity: method
configuration:
  GET:
    supported_formats:
      - json
    supported_auth:
      - basic_auth
      - cookie
dependency:
  enforced:
    module:
      - custom_jsonapi
That will support GET, or read-only access. If you wanted to update or post content you’d add POST or PATCH information. You could also switch out the authentication to something like OAuth, but for this article we’ll stick with the built-in basic and cookie authentication methods. If using basic authentication and the Basic Auth module isn’t already enabled, enable it.
Navigate to a URL like http://sourcesite.com/api/node/article?_format=api_json and confirm that JSON is being output at that URL.
That's it for the source.

Prepare the Target Site
The target site should be running Drupal 8.2.3 or higher. There are changes to the way file imports work that won't work in earlier versions. It should already have a matching article content type and field_image field ready to accept the articles from the other site.
Enable the core Migrate module. Download and enable the Migrate Plus and Migrate Tools modules. Make sure to get the versions that are appropriate for the current version of core. Migrate Plus had 8.0 and 8.1 branches that only work with outdated versions of core, so currently you need version 8.2 of Migrate Plus.
To make it easier, and so I don't forget how I got this working, I created a migration example as the Import Drupal module on GitHub. Download this module into your module repository. Edit the YAML files in the /config/optional directory of that module to alter the JSON source URL so it points to the domain of the source site created in the earlier step.
It is important to note that if you alter the YAML files after you first install the module, you'll have to uninstall and then reinstall the module to get Migrate to see the YAML changes.

Tweaking the Feed Using JSON API
The primary path used for our migration is (where sourcesite.com is a valid site):

http://sourcesite.com/api/node/article?_format=api_json
This will display a JSON feed of all articles. The articles have related entities. The field_image field points to related images, and the uid/author field points to related users. To view the related images, we can alter the path as follows:

http://sourcesite.com/api/node/article?_format=api_json&include=field_image
That will add an included array to the feed that contains all the details about each of the related images. This way we won’t have to query again to get that information, it will all be available in the original feed. I created a gist with an example of what the JSON API output at this path would look like.
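In miniature, the payload shape looks like this (a stripped-down, hypothetical response, not Freitag's or any real site's data): relationships in data reference entries in included by type and id, and can be resolved without extra requests.

```python
# Minimal, hypothetical JSON API response: one article whose field_image
# relationship points into the top-level "included" array.
response = {
    "data": [{
        "type": "node--article",
        "id": "a1b2",
        "attributes": {"title": "First post"},
        "relationships": {
            "field_image": {"data": {"type": "file--file", "id": "f9e8"}},
        },
    }],
    "included": [{
        "type": "file--file",
        "id": "f9e8",
        "attributes": {"uri": "public://image.jpg"},
    }],
}


def resolve(response, ref):
    """Find the included resource matching a relationship reference."""
    for resource in response.get("included", []):
        if resource["type"] == ref["type"] and resource["id"] == ref["id"]:
            return resource
    return None


article = response["data"][0]
image_ref = article["relationships"]["field_image"]["data"]
print(resolve(response, image_ref)["attributes"]["uri"])  # public://image.jpg
```

Everything a consumer needs arrives in one response, which is exactly what makes the single-pass migration possible.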
To include authors as well, the path would look like the following. In JSON API you can follow the related information down through as many levels as necessary:

http://sourcesite.com/api/node/article?_format=api_json&include=field_image,uid
Swapping out the domain may be the only change needed to the example module, and it's a good place to start. Read the JSON API module documentation to explore other changes you might want to make to limit the fields that are returned, or to sort or filter the list.
Manually test the path you end up with in your browser or with a tool like Postman to make sure you get valid JSON at that path.

Migrating From JSON
I had a lot of trouble finding any documentation about how to migrate into Drupal 8 from a JSON source. I finally found some in the Migrate Plus module. The rest I figured out from my earlier work on the original JSON Source module (now deprecated) and by trial and error. Here’s the source section of the YAML I ended up with, when migrating from another Drupal 8 site that was using JSON API.
source:
  plugin: url
  data_fetcher_plugin: http
  data_parser_plugin: json
  urls: http://sourcesite.com/api/node/article?_format=api_json
  ids:
    nid:
      type: integer
  item_selector: data/
  fields:
    -
      name: nid
      label: 'Nid'
      selector: /attributes/nid
    -
      name: vid
      label: 'Vid'
      selector: /attributes/vid
    -
      name: uuid
      label: 'Uuid'
      selector: /attributes/uuid
    -
      name: title
      label: 'Title'
      selector: /attributes/title
    -
      name: created
      label: 'Created'
      selector: /attributes/created
    -
      name: changed
      label: 'Changed'
      selector: /attributes/changed
    -
      name: status
      label: 'Status'
      selector: /attributes/status
    -
      name: sticky
      label: 'Sticky'
      selector: /attributes/sticky
    -
      name: promote
      label: 'Promote'
      selector: /attributes/promote
    -
      name: default_langcode
      label: 'Default Langcode'
      selector: /attributes/default_langcode
    -
      name: path
      label: 'Path'
      selector: /attributes/path
    -
      name: body
      label: 'Body'
      selector: /attributes/body
    -
      name: uid
      label: 'Uid'
      selector: /relationships/uid
    -
      name: field_image
      label: 'Field image'
      selector: /relationships/field_image
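What those /attributes/... selectors do can be modelled in a few lines (my own illustration of the idea, not Migrate Plus's actual parser): each /-separated step descends one level into the decoded JSON item.

```python
def select(item, selector):
    """Walk an xpath-like selector (e.g. "/attributes/nid") down a
    decoded JSON API item, one path segment at a time."""
    value = item
    for step in selector.strip("/").split("/"):
        value = value[step]
    return value


# A single decoded item from the "data" array of the feed (hypothetical):
item = {
    "attributes": {"nid": 42, "title": "First post"},
    "relationships": {"uid": {"data": {"id": "u1"}}},
}
print(select(item, "/attributes/nid"))    # 42
print(select(item, "/attributes/title"))  # First post
```

The named fields then let the process section refer to plain names like nid instead of full paths.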
One by one, I’ll clarify some of the critical elements in the source configuration.
File-based imports, like JSON and XML, use the same pattern now. The main variation is the parser, and for JSON and XML, the parser is in the Migrate Plus module:
source:
  plugin: url
  data_fetcher_plugin: http
  data_parser_plugin: json
The url is the place where the JSON is being served. There could be more than one URL, but in this case there is only one. Reading through multiple URLs is still pretty much untested, but I didn't need that:

urls: http://sourcesite.com/api/node/article?_format=api_json
We need to identify the unique id in the feed. When pulling nodes from Drupal, it’s the nid:
ids:
  nid:
    type: integer
We have to tell Migrate where in the feed to look to find the data we want to read. A tool like Postman (mentioned above) helps figure out how the data is configured. When the source is using JSON API, it's an array with a key of data:

item_selector: data/
We also need to tell Migrate what the fields are. In the JSON API, they are nested below the main item selector, so they are prefixed using an xpath pattern to find them. The following configuration lets us refer to them later by a simple name instead of the full path to the field. I think the label would only come into play if you were using a UI:

fields:
  -
    name: nid
    label: 'Nid'
    selector: /attributes/nid

Setting up the Image Migration Process
For the simple example in the Github module we’ll just try to import nodes with their images. We’ll set the author to an existing author and ignore taxonomy. We’ll do this by creating two migrations against the JSON API endpoint, first one to pick up the related images, and then a second one to pick up the nodes.
Most fields in the image migration just need the same values they're pulling in from the remote file, since they already have valid Drupal 8 values. The uri value, however, contains a local URL that needs to be adjusted to point to the full path of the file source, so the file can be downloaded or copied into the new Drupal site.
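That uri adjustment is just a concatenation, as this sketch shows (made-up paths; Migrate's actual concat plugin does the joining, and a download or file_copy step then fetches the result):

```python
def source_full_path(base_path, relative_url):
    """Mimic Migrate's concat step: join the source site's base path
    and the file's relative url into a downloadable full URL."""
    return "/".join([base_path.rstrip("/"), relative_url.lstrip("/")])


# The download (or file_copy) process step then fetches this URL and
# writes it to the local uri, e.g. public://image.jpg.
print(source_full_path("http://sourcesite.com/", "sites/default/files/image.jpg"))
# http://sourcesite.com/sites/default/files/image.jpg
```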
Recommendations for how best to migrate images have changed over time as Drupal 8 has matured. As of Drupal 8.2.3 there are two basic ways to process images, one for local images and a different one for remote images. The process steps are different than in earlier examples I found. There is not a lot of documentation about this. I finally found a Drupal.org thread where the file import changes were added to Drupal core and did some trial and error on my migration to get it working.
For remote images:
source:
  ...
  constants:
    source_base_path: 'http://sourcesite.com/'
process:
  filename: filename
  filemime: filemime
  status: status
  created: timestamp
  changed: timestamp
  uid: uid
  uuid: id
  source_full_path:
    plugin: concat
    delimiter: /
    source:
      - 'constants/source_base_path'
      - url
  uri:
    plugin: download
    source:
      - '@source_full_path'
      - uri
    guzzle_options:
      base_uri: 'constants/source_base_path'
For local images change it slightly:
source:
  ...
  constants:
    source_base_path: 'http://sourcesite.com/'
process:
  filename: filename
  filemime: filemime
  status: status
  created: timestamp
  changed: timestamp
  uid: uid
  uuid: id
  source_full_path:
    plugin: concat
    delimiter: /
    source:
      - 'constants/source_base_path'
      - url
  uri:
    plugin: file_copy
    source:
      - '@source_full_path'
      - uri
The above configuration works because the Drupal 8 source uri value is already in the Drupal 8 format, public://image.jpg. If migrating from a pre-Drupal 7 or non-Drupal source, that uri won't exist in the source. In that case you would need to adjust the process for the uri value to something more like this:
source:
  constants:
    is_public: true
  ...
process:
  ...
  source_full_path:
    -
      plugin: concat
      delimiter: /
      source:
        - 'constants/source_base_path'
        - url
    -
      plugin: urlencode
  destination_full_path:
    plugin: file_uri
    source:
      - url
      - file_directory_path
      - temp_directory_path
      - 'constants/is_public'
  uri:
    plugin: file_copy
    source:
      - '@source_full_path'
      - '@destination_full_path'

Run the Migration
Once you have the right information in the YAML files, enable the module. On the command line, type this:

drush migrate-status
You should see two migrations available to run. The YAML files include migration dependencies and that will force them to run in the right order. To run them, type:
drush mi --all
The first migration is import_drupal_images. This has to be run before import_drupal_articles, because field_image on each article is a reference to an image file. The image migration uses the path that includes the related image details, and just ignores the primary feed information.
The second migration is import_drupal_articles. This pulls in the article information using the same URL, this time without the included images. When each article is pulled in, it is matched to the image that was pulled in previously.
You can run one migration at a time, or even just one item at a time, while testing this out:
drush migrate-import import_drupal_images --limit=1
You can roll back and try again:
drush migrate-rollback import_drupal_images
If all goes as it should, you should be able to navigate to the content list on your new site and see the content that Migrate pulled in, complete with image fields. There is more information about the Migrate API on Drupal.org.

What Next?
There are lots of other things you could do to build on this. A Drupal 8 to Drupal 8 migration is easier than many other things, since the source data is generally already in the right format for the target. If you want to migrate in users or taxonomy terms along with the nodes, you would create separate migrations for each of them that would run before the node migration. In each of them, you’d adjust the include value in the JSON API path to pull the relevant information into the feed, then update the YAML file with the necessary steps to process the related entities.
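For example, a JSON API request that includes related entities alongside the articles might use a URL like the following (the field names here are illustrative; use whatever reference fields exist on your source site):

```
http://sourcesite.com/jsonapi/node/article?include=uid,field_tags,field_image
```

Each name added to the include parameter pulls the referenced entities into the same response, so the dependent migrations have the data they need.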
You could also try pulling content from older versions of Drupal into a Drupal 8 site. If you want to pull everything from one Drupal 6 site into a new Drupal 8 site, you would just use the built-in Drupal-to-Drupal migration capabilities; but if you want to selectively pull some items from an earlier version of Drupal into a new Drupal 8 site, this technique might be useful. The JSON API module won’t work on older Drupal versions, so the source data would have to be processed differently, depending on what you use to set up the older site to serve JSON. You might need to dig into the migration code built into Drupal core for Drupal-to-Drupal migrations to see how Drupal 6 or Drupal 7 data has to be massaged to get it into the right format for Drupal 8.
Finally, you can adapt the above techniques to pull any kind of non-Drupal JSON data into a Drupal 8 site. You’ll just have to adjust the selectors to match the format of the data source, and do more work in the process steps to massage the values into the format that Drupal 8 expects.
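As a rough sketch of what that adaptation might look like with the Migrate Plus url source plugin: the feed URL, item selector, and field names below are all hypothetical and must be changed to match your actual data source.

```yaml
source:
  plugin: url
  data_fetcher_plugin: http
  data_parser_plugin: json
  urls: 'http://example.com/export.json'
  # Selector pointing at the array of records inside the JSON document.
  item_selector: /items
  fields:
    -
      name: title
      label: 'Title'
      selector: title
  ids:
    title:
      type: string
process:
  title: title
destination:
  plugin: 'entity:node'
```

The fields list maps selectors in the JSON to source property names, and the process section then massages those properties into what Drupal 8 expects.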
The Drupal 8 Migrate module and its contributed helpers are getting more and more polished, and figuring out how to pull in content from JSON sources could be a huge benefit for many sites. If you want to help move the Migrate effort forward, you can dig into the Migrate in core initiative and issues on Drupal.org.
We normally share our financial statements in posts about public board meetings, since that is the time when board members approve the statements. However, I wanted to give this quarter’s update its own blog post. We’ve made many changes to improve our sustainability over the last few months and I am fully embracing our value of communicating with transparency by giving insight into our progress.
First, a word of thanks
We are truly thankful for all the contributions that our community makes to help Drupal thrive. Your contribution comes in the form of time, talent, and treasure and all are equally important. Just as contributing code or running a camp is critical, so is financial contribution.
The Drupal Association is able to achieve its mission to unite the community to build and promote Drupal thanks to those who buy DrupalCon tickets and sponsor the event, our Supporters and Members, Drupal.org sponsors, and talent recruiters who post jobs on Drupal Jobs.
We use these funds to maintain Drupal.org and its tooling so the community can build and release the software and so technical evaluators can learn why Drupal is right for them through our new marketing content. It also funds DrupalCon production so we can bring the community together to level up skills, accelerate contribution, drive Drupal business, and build stronger bonds within our community. Plus, it funds Community Cultivation Grants and DrupalCon scholarships, removing financial blockers for those who want to do more for Drupal. And of course, these funds pay staff salaries so we have the right people on board to do all of this mission work.
I also want to thank our board members who serve on the Finance Committee, Tiffany Farris (Treasurer), Dries Buytaert, Jeff Walpole, and Donna Benjamin. They provide financial oversight for the organization, making sure we are the best stewards possible for the funds the community gives to us. I also want to thank Jamie Nau of Summit CPA, our new CFO firm. Summit prepares our financial statements and forecasts and is advising us on long term sustainability.
Q3 Financial Statements
A financial statement is a formal record of the financial activities of the Drupal Association. The financial statements present information in a structured way that should make it easy to understand what is happening with the organization's finances.
Once staff closes the books each month, Summit CPA prepares the financial statement, which the finance committee reviews and approves. Finally, the full Drupal Association Board approves the financial statements. This process takes time, which is why Q3 financials are released in Q4.
You can find the Q3 financial statements here. They explain how the Association used its money in July, August, and September of this year. It takes a little financial background to understand them, so Summit CPA provides an executive summary and sets KPIs so it is clear how we are doing against important financial goals.
The latest executive summary is at the beginning of the September financial statement. In short, it says we are sustainable and on the right path to continue improving our financial health.
“We are working on building an adequate cash reserve balance. As of September a cash balance of $723K is 14% of twelve-months of revenue. Summit recommends a cash reserve of 15%-30% of estimated twelve-month revenue. Since Drupal’s revenue and expenditures drastically fluctuate from month to month [due to DrupalCon] a cash reserve goal closer to 30% is recommended.
Through August we have achieved a Net Income Margin of 4% and a Gross Profit Margin 33%. Our goal is to increase the Net Income Margin to over 10% during the next year.”
- Summit CPA
Improving our sustainability will continue to be an imperative through 2017, so the Association can serve its mission for generations to come. Financial health improvements will come from the savings we gain over time from the staff reductions made this summer. Another area of focus is improving our programs’ gross margins.
You can expect to see the Q4 2016 financials in Q1 2017. You can also expect to see our 2017 budget and operational focus. We are certainly excited (and thankful) for your support and we look forward to finding additional ways to serve this amazing community in 2017.
Have you ever been asked to log into a website while you are viewing a page? And after doing so you get redirected to some page other than the one you were reading? This is an obvious and rather common usability problem. When this happens people lose track of what they were doing and some might not even bother to go back. Let's find out how to solve this in Drupal 8.
In a recent project a client wisely requested exactly that: whenever a user logs into the site, redirect them to the page they were on before clicking the login link. This seemed like a very common request, so we looked for a contrib module that provided the functionality. Login Destination used to do it in Drupal 7. Sadly, the Drupal 8 version of this module does not provide the functionality yet.
Other modules, and some combinations of them, were tested without success. Therefore, we built Login Return Page. It is a very small module that does just one thing and does it well: it appends destination=/current/page to all the links pointing to /user/login, effectively redirecting users to the page they were viewing before logging in. The project is awaiting approval before being promoted to a full project.
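Concretely, for a visitor reading a page at the hypothetical path /blog/my-post, the module rewrites the login link from /user/login to:

```
/user/login?destination=/blog/my-post
```

Drupal's standard handling of the destination query parameter then takes care of the redirect after a successful login.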
Have you had a similar need? Are there other things you are requested to do after login? Please share them in the comments.