
Styles of Storytelling: Cultivating Compelling Long-form Content

May 1, 2015 at 12:00pm

We are pumped to talk about styles of storytelling on Tuesday, May 12, 2015, from 2:15 to 3:15pm.

In our DrupalCon session, we will:

  • dissect what’s become a major buzzword: “long-form content”
  • take a look at different types of long-form content
  • explore how “story” fits in
  • uncover what types of storytelling best suit your needs
  • help you figure out when long-form is right for you and your organization

Many organizations are embracing storytelling techniques to better connect with their audiences and drive them to action. They’re implementing long-form content as a platform for storytelling, making use of its rich imagery, interactive elements, and better sharing capabilities.

People generally learn and remember more when a story engages more of their senses. Stories that include images get about twice the engagement of text-only stories, and stories told with visual elements are instantly captivating. The more senses a story engages, the more emotion it stirs and the more memorable the experience becomes.

So come join us! And in the meantime, get yourself excited about storytelling by checking out this TED talk by Pixar writer Andrew Stanton.

Categories: Planet Drupal

Pain Points of Learning and Contributing in the Drupal Community

April 30, 2015 at 9:14pm

At DrupalCon L.A. I’ll be co-leading a core conversation about the “Pain Points of Learning and Contributing in the Drupal Community.” A core conversation is not a teaching session; its format is a little different and lets the speaker engage with the audience.

So what is this conversation all about?

I’d like to start with a story. I started contributing to Drupal 8 core just before DrupalCon Portland in 2013. I was listening to a live hangout with the different Drupal 8 initiative leads, and Larry Garfield (crell) was talking about how he needed help with the hook_menu conversion. I asked Larry how I could help, and he pointed me to some documentation he had written on Drupal.org. I took my first steps into core with a normal issue, and I’ve been contributing ever since. This year I’ve been slowly climbing up the contributor list on drupalcores.com.

As someone who puts a lot of energy into contribution, I hope it means something when I say: it’s too hard to contribute to major/critical issues in the Drupal 8 issue queue.

I ran into a great example recently, when I picked up issue 2368769. I figured that after 5 years as a Drupalist, I must be able to make some meaningful contribution to this critical bug. Boy was I wrong! What did they mean by “lazy-loading route enhancers”? I searched the codebase and Drupal documentation, and couldn’t find any example to work from. I found generic Symfony documentation on the subject, but it still wasn’t enough.

What’s going on in the issue queue?

This story reveals a bottleneck in the Drupal 8 development process: the top contributors. There is a group of about 50 contributors – or perhaps fewer – who understand and are current on the ongoing major/critical issues in Drupal 8. We all appreciate their incredibly hard work, especially since most of them contribute in their personal time. But in my case, even as an experienced Drupalist and core contributor, I was stuck! Asking top contributors for help in IRC is always an option, but it distracts them from their own work, concentration, and thought process – we don’t want to see top contributors spending 90% of their time answering questions!

So how can we make it possible for non-top-50 contributors to help out on major/critical issues? How can knowledgeable Drupalists who want to contribute to major/critical issues make life easier for top contributors, instead of harder? What are some ways to get knowledge transfer outside of that group?

With just a little more guidance, people outside that “top 50” group could do so much more than the “novice” and “normal” issues we presently tackle. We talk about “continuous contribution” in Drupal 8, where a contributor doesn’t hesitate to pick up an issue; if you’re eager to learn every day, nothing should stop you from contributing.

How will the Drupal world look with our new ideas adopted? What could be possible?

In the Drupal community, we always say, “if you don’t like something, make it better.” This session is a first step toward making this better.

I’m excited to hear suggestions from the community. How do we break the “top 50” limit and let the next 100 contributors work on major/critical issues? This conversation is where we can tackle this problem together and encourage more contributors to stop limiting themselves and get involved on a deeper level. Maybe we’ll even see the benefits as soon as the big sprint day on Friday, May 15, 2015. I hope to see more contributors working on major/critical bugs and issues. Let’s break the barrier together!

Categories: Planet Drupal

Edge Toolkit: Staying Relevant in the Long Run

April 24, 2015 at 3:01pm

Launching a website is just the beginning of a process. Websites need nurturing and care over the long run to stay relevant and effective. This is even more true for a service or tool such as LibraryEdge.org. Why would users come back if they can only use the provided service once or can’t see progress over time? And how can you put that love and care into the service if it is not self-funded?

This month, LibraryEdge.org released a number of changes to address just these issues.

Helping Libraries Stay Relevant

Before we dive into the release, here’s a bit on the Edge Initiative.

With the changes created by modern technology, library systems need a way to stay both relevant and funded in the 21st century. A big part of solving that problem is developing public technology offerings. Even in the internet-connected age, many lower-income households don’t have access to the technology needed to apply for jobs, sign up for health insurance, or file taxes, because they don’t have personal computers and internet connections. So where can people go to use the technology necessary for these and other critical tasks? Libraries help bridge the gap with computers and internet access freely available to the public.

It’s important that libraries stay open and are funded so their resources remain widely available. By helping library systems improve their “public access computers/computing,” the Edge Initiative and its partners have made major strides in making sure libraries continue to be a valuable resource to our society.

That’s where LibraryEdge.org comes in. The Edge Coalition and Forum One built LibraryEdge.org in 2013 as a tool for library systems to self-evaluate their public technology services through a comprehensive assessment – plus a series of tools and resources to help library systems improve their services.

New Functionality

Reassessment

The biggest feature update we recently launched was enabling libraries to retake the Assessment, so they can see how they have improved and where they still need work compared to the previous year. To give retakes a structure, we built a new feature called Assessment Windows, which lets state accounts control when the libraries in their states can take the Assessment. States can now track their libraries’ goals and progress on Action Items, assess their libraries’ progress more accurately, and adapt their priorities and programming to align with library needs.

Results Comparison

The Edge Toolkit was initially built to allow users to view their results online, along with providing downloadable PDF reports so libraries can easily share their results with their state legislatures and other interested parties. Now that libraries can have results for two assessments, we’ve updated the online results view and the PDFs. Libraries can now see a side-by-side comparison of their most recent results with their previous results.

Graphs

It’s common knowledge that people retain more of what they see, so we’ve also visualized important pieces of the results data with new graphs. If a library has only taken the assessment once, then the charts will only display its highest and lowest scoring benchmarks. However, if they’ve taken the assessment a second time, they can also see bar graphs for the most improved and most regressed benchmarks.

Improved User Experience

Interviews

We made a number of enhancements based on feedback from libraries that have been using the tool for the past couple of years, as well as from interviews that we conducted with State Library Administrators. Starting with a series of interviews gave us great insight into how the tool was being used and what improvements were needed.

New Navigation

The added functionality of being able to retake the Assessment increased the level of complexity for the Edge Toolkit. So we redesigned the interface to guide users through this complex workflow. We split out the Toolkit into four sections: introduction/preparation, taking the assessment, reviewing your results, and taking action. This new workflow and navigation ensures a user is guided from one step to the next and is able to complete the assessment.

Notification Messages

Several dates and statuses affect a library system as it works through the assessment, such as how long it has to take the assessment and whether the assessment is open to be retaken. We’ve implemented notifications that present this information to users as they are guided through the workflow.

Automated Testing

When we release new features, we need to ensure that other components of the site don’t break. Testing a system this complex can take a long time and would get expensive over the lifetime of the site if done manually. Furthermore, testing some sections or statuses involves taking a long assessment multiple times. To increase our efficiency and save time in our quality assurance efforts, we developed a suite of automated tests using Selenium.

What’s Next for Edge

The updated LibraryEdge.org now allows libraries to assess their offerings again and again so they can see how they are improving. Additionally, we’ve built a paywall so Edge can be self-supporting and continue to provide this valuable service to libraries after current funding ends. The launch of this updated site will help Edge remain relevant to its users and, therefore, ensure libraries remain relevant to our communities.

Categories: Planet Drupal

Bike Spike: Scaling Cascade Bicycle Club’s Web Application

January 28, 2015 at 2:57pm

As the largest bicycling club in the country with more than 16,000 active members and a substantially larger community across the Puget Sound, Cascade Bicycle Club requires serious performance from its website. For most of the year, Cascade.org serves a modest number of web users as it furthers the organization’s mission of “improving lives through bicycling.”

But a few days each year, Cascade opens registration for its major sponsored rides, which results in a series of massive spikes in traffic. Cascade.org has in the past struggled to keep up with demand during these spikes. During the 2014 registration period for example, site traffic peaked at 1,022 concurrent users and >1,000 transactions processed within an hour. The site stayed up, but the single web server seriously struggled to stay on its feet.

In preparation for this year’s event registrations, we implemented horizontal scaling at the web server level as the next logical step forward in keeping pace with Cascade’s members. What is horizontal scaling, you might ask? Let me explain.

[Ed Note: This post gets very technical, very quickly.]

Overview

We had already set up hosting for the site in the Amazon cloud, so our job was to build out the new architecture there, including new Amazon Machine Images (AMIs) along with an Autoscale Group and Scaling Policies.
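
To make this concrete, here is a minimal sketch of how such a pool can be defined with the AWS CLI. The resource names, AMI ID, instance type, sizes, and availability zones below are placeholders, not Cascade’s actual values.

# Launch configuration: which AMI and instance type each web server instance runs.
aws autoscaling create-launch-configuration \
  --launch-configuration-name cascade-web-lc \
  --image-id ami-0123abcd \
  --instance-type c3.large \
  --security-groups sg-0456efgh \
  --key-name cascade-deploy

# Autoscaling group: keep 2-8 copies of that server behind the load balancer,
# tagged 'webpool' so scripts can find the running instances later.
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name cascade-webpool \
  --launch-configuration-name cascade-web-lc \
  --min-size 2 --max-size 8 \
  --load-balancer-names cascade-elb \
  --availability-zones us-east-1a us-east-1b \
  --health-check-type ELB --health-check-grace-period 300 \
  --tags "Key=webpool,Value=true,PropagateAtLaunch=true"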

Here is a diagram of the architecture we ended up with. I’ll touch on most of these pieces below.

Web Servers as Cattle, Not Pets

I’m not the biggest fan of this metaphor, but it’s catchy: the fundamental mental shift when moving to automatic scaling is to stop thinking of servers as named and coddled pets and start treating them as identical, ephemeral cogs (a herd of cattle, if you will).

In our case, multiple web server instances are running at a given time, and more may be added or taken away automatically at any given time. We don’t know their IP addresses or hostnames without looking them up (which we can do either via the AWS console, or via AWS CLI — a very handy tool for managing AWS services from the command line).

The load balancer is configured to enable connection draining. When the autoscaling group triggers an instance removal, the load balancer will stop sending new traffic, but will finish serving any requests in progress before the instance is destroyed. This, coupled with sticky sessions, helps alleviate concerns about disrupting transactions in progress.
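
Both behaviors are load balancer settings that can be switched on with the AWS CLI. A rough sketch, assuming a classic ELB; the load balancer name, policy name, and timeouts are placeholders:

# Connection draining: finish in-flight requests (up to 300 seconds) before an instance is removed.
aws elb modify-load-balancer-attributes \
  --load-balancer-name cascade-elb \
  --load-balancer-attributes '{"ConnectionDraining":{"Enabled":true,"Timeout":300}}'

# Sticky sessions: pin a visitor to one web server for the lifetime of an ELB-generated cookie.
aws elb create-lb-cookie-stickiness-policy \
  --load-balancer-name cascade-elb \
  --policy-name cascade-sticky \
  --cookie-expiration-period 3600
aws elb set-load-balancer-policies-of-listener \
  --load-balancer-name cascade-elb \
  --load-balancer-port 80 \
  --policy-names cascade-sticky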

The AMI for the “cattle” web servers (3) is similar to our old single-server configuration, running Nginx and PHP tuned for Drupal. It actually uses a smaller instance size than the old server, though, since additional servers are automatically added to the pool as needed based on the load on the existing servers, and it has some additional configuration that I’ll discuss below.

As you can see in the diagram, we still have many “pets” too. In addition to the surrounding infrastructure like our code repository (8) and continuous integration (7) servers, at AWS we have a “utility” server (9) used for hosting our development environment and some of our supporting scripts, as well as a single RDS instance (4) and a single EC2 instance used as a Memcache and Solr server (6). We also have an S3 bucket for managing our static files (5) — more on that later.

Handling Mail

One potential whammy we caught late in the process was handling mail sent from the application. Since the IP address of the web server instance sending the mail will not match the domain’s SPF record (the list of IP addresses authorized to send mail for the domain), the mail could be flagged as spam, or mail from the domain could be blacklisted.

We were already running Mandrill for Drupal’s transactional mail, so to avoid this problem, we configured our web server AMI to have Postfix route all mail through the Mandrill service. Amazon Simple Email Service could also have been used for this purpose.
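
The gist of that AMI change is a standard Postfix relayhost setup pointed at Mandrill’s authenticated SMTP endpoint (smtp.mandrillapp.com on port 587). A sketch of what that configuration might look like; the credentials and file paths are placeholders:

# Route all outbound mail from the instance through Mandrill.
postconf -e 'relayhost = [smtp.mandrillapp.com]:587'
postconf -e 'smtp_sasl_auth_enable = yes'
postconf -e 'smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd'
postconf -e 'smtp_sasl_security_options = noanonymous'
postconf -e 'smtp_tls_security_level = encrypt'

# SASL credentials: Mandrill account name plus an API key as the password.
echo '[smtp.mandrillapp.com]:587 MANDRILL_USER:MANDRILL_API_KEY' > /etc/postfix/sasl_passwd
postmap /etc/postfix/sasl_passwd
service postfix restart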

Static File Management

With our infrastructure in place, the main change at the application level is the way Drupal interacts with the file system. With multiple web servers, we can no longer read and write from the local file system for managing static files like images and other assets uploaded by site editors. A content delivery network or networked file system share lets us offload static files from the local file system to a centralized resource.

In our case, we used Drupal’s S3 File System module to manage our static files in an Amazon S3 bucket. S3FS adds a new “Amazon Simple Storage Service” file system option and stream wrapper; core and contributed modules, as well as file fields, are configured to use this file system. The AWS CLI provided an easy way to initially transfer static files to the S3 bucket and to iteratively sync new files to the bucket as we tested and proceeded toward launch of the new system.
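
The transfer itself is a one-liner that can be re-run safely, since only new or changed files are copied on subsequent runs. The bucket name here is a placeholder:

# Copy Drupal's public files into the S3 bucket; re-run to pick up files added since the last sync.
aws s3 sync sites/default/files s3://cascade-static-assets/ --acl public-read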

In addition to static files, special care has to be taken with aggregated CSS and JavaScript files. Drupal’s core aggregation can’t be used, as it will write the aggregated files to the local file system. Options (which we’re still investigating) include a combination of contributed modules (Advanced CSS/JS Aggregation + CDN seems like it might do the trick), or Grunt tasks to do the aggregation outside of Drupal during application build (as described in Justin Slattery’s excellent write-up).

In the case of Cascade, we also had to deal with complications from CiviCRM, which stubbornly wants to write to the local file system. Thankfully, these are primarily cache files that Civi doesn’t mind duplicating across webservers.

Drush & Cron

We want a stable, centralized host from which to run cron jobs (which we obviously don’t want to execute on each server) and Drush commands, so one of our “pets” is a small EC2 instance that we maintain for this purpose, along with a few other administrative tasks.

Drush commands can be run against the application from anywhere via Drush aliases, which requires knowing the hostname of one of the running server instances. This can be achieved most easily by using AWS CLI. Something like the bash command below will return the running instances (where ‘webpool’ is an arbitrary tag assigned to our autoscaling group):

[cascade@dev ~]$ aws ec2 describe-instances --filters "Name=tag-key,Values=webpool" | grep ^INSTANCE | awk '{print $14}' | grep 'compute.amazonaws.com'

We wrote a simple bash script, update-alias.sh, to update the ‘remote-host’ value in our Drush alias file with the hostname of the last running server instance.
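
Drush alias files are just PHP, so the script boils down to looking up a running instance and rewriting one value. A minimal sketch of update-alias.sh, with the alias file path taken as an assumption:

#!/bin/bash
# update-alias.sh: point the Drush alias at a currently running web server.

# Public hostname of the last running instance tagged 'webpool'.
HOST=$(aws ec2 describe-instances \
  --filters "Name=tag-key,Values=webpool" "Name=instance-state-name,Values=running" \
  --query 'Reservations[].Instances[].PublicDnsName' --output text | tr '\t' '\n' | tail -n 1)

ALIAS_FILE=/home/cascade/.drush/cascade.aliases.drushrc.php

# Rewrite the 'remote-host' entry in place.
sed -i "s/'remote-host' => '[^']*'/'remote-host' => '$HOST'/" "$ALIAS_FILE"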

Our cron jobs execute update-alias.sh and then run the application cron jobs (both Drupal and CiviCRM).
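
On the utility server that looks roughly like the following crontab entry (the paths, schedule, and alias name are assumptions):

# Refresh the alias so @cascade points at a live web server, then run Drupal cron.
0 * * * * /home/cascade/bin/update-alias.sh && drush @cascade core-cron -y
# CiviCRM's scheduled jobs are triggered separately in the same fashion.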

Deployment and Scaling Workflows

Our webserver AMI includes a script, bootstrap.sh, that either builds the application from scratch (cloning the code repository, creating placeholder directories, and symlinking to environment-specific settings files) or updates the application if it already exists (updating the code repository and doing some cleanup).
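
A simplified sketch of what such a bootstrap script can look like; the repository URL, paths, and settings file names are assumptions:

#!/bin/bash
# bootstrap.sh: build or update the application on this instance.
APP_ROOT=/var/www/cascade
REPO=git@example.com:cascade/cascade-org.git

if [ ! -d "$APP_ROOT/.git" ]; then
  # Fresh instance: build the application from scratch.
  git clone "$REPO" "$APP_ROOT"
  mkdir -p "$APP_ROOT/sites/default/files"
  ln -sf /etc/drupal/settings.prod.php "$APP_ROOT/sites/default/settings.php"
else
  # Existing checkout: pull the latest code and clean up.
  cd "$APP_ROOT" && git pull --ff-only && git clean -fd
fi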

A separate script, deploy-to-autoscale.sh, collects all of the running instances similar to update-alias.sh as described above, and executes bootstrap.sh on each instance.

With those two utilities, our continuous integration/deployment process is straightforward. When code changes are pushed to our Git repository, we trigger a job on our Jenkins server that essentially just executes deploy-to-autoscale.sh. We run update-alias.sh to update our Drush alias, clear the application cache via Drush, tag our repository with the Jenkins build ID, and we’re done.
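
The Jenkins job itself is little more than a shell build step along these lines (the alias and tag format are assumptions; BUILD_ID is provided by Jenkins):

# Deploy the new code to every running web server, refresh the alias, clear caches, tag the build.
./deploy-to-autoscale.sh
./update-alias.sh
drush @cascade cache-clear all
git tag "build-$BUILD_ID" && git push origin "build-$BUILD_ID"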

For the autoscaling itself, our current policy is to spin up two new server instances when CPU utilization across the pool of instances reaches 75% for 90 seconds or more. New server instances simply run bootstrap.sh to provision the application before they’re added to the webserver pool.

There’s a 300-second grace period between additional autoscale operations to prevent a stampede of new cattle. Machines are destroyed when CPU usage falls beneath 20% across the pool; they’re removed one at a time, so capacity decreases more gradually than the swift ramp-up, which fits the profile of our traffic.
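
For reference, a scale-out/scale-in policy pair like that can be sketched with the AWS CLI as follows. The names are placeholders, and the CloudWatch alarms that trigger these policies (average CPU across the group at or above 75% to scale out, at or below 20% to scale in) are defined separately and attached to them.

# Scale out: add two instances at a time, then wait 300 seconds before scaling again.
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name cascade-webpool \
  --policy-name cascade-scale-out \
  --adjustment-type ChangeInCapacity \
  --scaling-adjustment 2 \
  --cooldown 300

# Scale in: remove one instance at a time for a gradual ramp-down.
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name cascade-webpool \
  --policy-name cascade-scale-in \
  --adjustment-type ChangeInCapacity \
  --scaling-adjustment=-1 \
  --cooldown 300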

More Butts on Bikes

With this new architecture, we’ve taken a huge step toward one of Cascade’s overarching goals: getting “more butts on bikes”! We’re still tuning and tweaking a bit, but the application has handled this year’s registration period flawlessly so far, and Cascade is confident in its ability to handle the expected — and unexpected — traffic spikes in the future.

A performant web application means an easier registration process for Cascade Bicycle Club, leaving the organization free to focus on what really matters: improving lives through bicycling.

Categories: Planet Drupal