Problem/Motivation

Drupal 8 has introduced many new subsystems and modules. While caching has been improved with cache tags and contexts, the raw performance of bootstrap, routing, route access checking, and generating full HTML responses is significantly slower.

Proposed resolution

Profile common scenarios (standardized list being worked on at #2497185: [no patch] Create standardized core profiling scenarios and start tracking metrics for them) with warm, 'cool', and cold caches to identify possible optimizations.

  • Warm cache: every possible cache get is a cache hit.
  • Cool cache: site-wide caches are a hit, but cache gets specific to context are cold (i.e. the first time node/2 gets visited when other caches were warmed by /user)
  • Cold cache: most or all cache gets are a cache miss (+ router and container rebuilds).
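The distinction can be sketched in plain PHP (illustrative only: these arrays and the cacheState() helper are invented for the example, not Drupal's actual cache API):

```php
<?php
// Illustrative sketch of the three profiling scenarios: they differ only in
// which cache lookups hit. All names here are invented.
function cacheState(array $siteWideCache, array $perPageCache, $path) {
  $siteHit = !empty($siteWideCache);                   // router, container, ...
  $pageHit = isset($perPageCache['render:' . $path]);  // context-specific gets
  if ($siteHit && $pageHit) {
    return 'warm';  // every possible cache get is a hit
  }
  if ($siteHit) {
    return 'cool';  // e.g. first visit to node/2 after /user warmed the rest
  }
  return 'cold';    // most gets miss; router and container rebuilds needed
}

$siteWide = ['router' => '...', 'container' => '...'];
$perPage  = ['render:/user' => '...'];

echo cacheState($siteWide, $perPage, '/user'), "\n";   // warm
echo cacheState($siteWide, $perPage, '/node/2'), "\n"; // cool
echo cacheState([], [], '/node/2'), "\n";              // cold
```

In the 'cool' case only the context-specific lookups miss, which is why it sits between the other two in cost.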

Once issues are identified, they should be opened according to issue priority (see https://www.drupal.org/core/issue-priority) as a child or related issue to this one.

A performance issue is critical by itself if one or more of the following are true:

  • There is a concrete performance issue identified by profiling (MySQL, PHP, or browser equivalent) and a viable plan to resolve it
  • It can't be committed to a patch-level version (8.0.0 => 8.0.1)
  • ~100ms or more in savings with cold caches, where the fix would have to be deferred to a minor version
  • ~10ms or more in savings with warm caches, where the fix would have to be deferred to 9.x
  • ~1ms or more in savings with the internal page cache, where the fix would have to be deferred to 9.x
  • Gets measurably worse with lots of contrib modules or large data sets (e.g. non-indexed queries) and would have to be deferred to a minor version
  • Other specific issues at branch maintainer discretion

Remaining tasks

  • Complete the list of 'common profiling scenarios', ensure each has been profiled and issues opened for anything found
  • Try to fix as many issues as possible before we hit zero critical issues, then repeat the profiling to see what's left and review outstanding child issues to confirm whether they're OK to fix after release or should be promoted to critical (due to necessary API changes etc.)

User interface changes

None.

API changes

N/A


Comments

Wim Leers’s picture

Wim Leers’s picture

webchick’s picture

Those patches only run routing, nothing else, so we can try and compare D7 vs. D8 head-to-head.

webchick’s picture

xhprof disabled, testing 404 page with:

ab -c 1 -n 10 http://d7/foo

Drupal 8 routing system about 2x as slow. (30 vs. 11 ms)

webchick’s picture

When benchmarking node/1, the part of renderForm() that's creating a dummy comment for the comment form is expensive:

http://cgit.drupalcode.org/drupal/tree/core/modules/comment/src/CommentP...

    $values = array(
      'entity_type' => $entity->getEntityTypeId(),
      'entity_id' => $entity->id(),
      'field_name' => $field_name,
      'comment_type' => $field_storage->getSetting('bundle'),
      'pid' => NULL,
    );
    $comment = $this->entityManager->getStorage('comment')->create($values);

webchick’s picture

Title: Profliling Helper Issue » Figure out why Drupal 8 is slow

By the time the routing system is instantiated, we're using 178 classes, making the class loader most likely a big source of issues.

Service container also getting hit 15 times because of proxy services.

xjm’s picture

Drupal 8 routing system about 2x as slow. (30 vs. 11 ms)

I think this should be "Drupal 8 bootstrap and routing"?

webchick’s picture

0.028 ms for D7 and 1ms for D8. 1/3 of the time is in node access checking, instantiating the entity manager for the first time.

\Drupal\Core\Entity\EntityManager::getAccessControlHandler

Wim Leers’s picture

As UID 1

/node/1 on D8
Access checking: AccessAwareRouter::checkAccess(): 1.071 ms, of which 0.568 ms are spent in AccessManager::performCheck(), 0.436 ms of that in EntityAccessCheck::access(), 0.401 ms of that in Node::access() and 0.341 ms of that in EntityManager::getAccessControlHandler(). So it's a full millisecond, but at least a third of that is spent on retrieving the Node entity type's access handler for the first time.
Along the way, another 0.200 ms is spent in autoloading classes/interfaces that we need.
/node/1 on D7
Access checking: _menu_check_access() 0.028 ms, because node_access() returns early because UID 1 has the 'bypass node access' permission.

EDIT: fix typo :)

EDIT: To clarify: we tested many scenarios, and did much more analysis than just this, but taking notes while profiling makes the profiling work go approximately 20 times slower. So we don't have notes for those. See the comments on https://groups.drupal.org/node/464283

Wim Leers’s picture

Problems + issues based on our profiling work:

3 issues seem like a good fit for somebody who's getting started with using the profiler: they're simple enough to profile, but can still have a noticeable impact. Overall, plenty of issues to investigate how much they would actually gain us!

dawehner’s picture

almaudoh’s picture

Added #2428609: Refactor PathBasedBreadcrumbBuilder to not use AccessAwareRouter for building links as a child issue. Don't really know what the impact of that would be.

effulgentsia’s picture

Should we close #2407177: Profile to determine which services should be lazy and raise the priority of this issue? Or merge them in some other way? Or is it useful to have them be distinct?

effulgentsia’s picture

Priority: Normal » Critical

Doing #13.

effulgentsia’s picture


Here's a patch that adds a few timers to some interesting places. There's probably more elegant ways to do this, so feel free to post patch improvements.
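The patch itself isn't reproduced here, but a minimal sketch of such phase timers might look like this (all names are invented; the real patch presumably marks the boundaries listed in the results below):

```php
<?php
// Hypothetical phase-timer helper in the spirit of the patch described above.
final class PhaseTimer {
  private static $last = NULL;
  private static $marks = [];

  public static function mark($phase) {
    $now = microtime(TRUE);
    if (self::$last !== NULL) {
      // Record the elapsed time since the previous mark, in milliseconds.
      self::$marks[$phase] = ($now - self::$last) * 1000;
    }
    self::$last = $now;
  }

  public static function report() {
    return self::$marks;
  }
}

PhaseTimer::mark('start');
usleep(2000);                    // stand-in for session setup
PhaseTimer::mark('pre session');
usleep(1000);                    // stand-in for routing
PhaseTimer::mark('routing');

foreach (PhaseTimer::report() as $phase => $ms) {
  printf("%s: %.1fms\n", $phase, $ms);
}
```

Each mark records the time since the previous one, matching the "X to Y" rows in the results below.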

I tested both a minimal profile and a standard profile, each viewing node/1 when logged in as an authenticated user with no other roles (and not user 1), with all caches warm.

I truncated the numbers below to remove insignificant digits. Actually, probably even the remaining last digit is insignificant: successive runs have a fair amount of variance, but I kept clicking the browser's refresh button until the numbers somewhat stabilized.

Minimal profile:

Start to pre session: 11.9ms
Pre session to pre request event: 1.8ms
Pre request event to pre authentication: 0.5ms
Authentication: 0.9ms
Post authentication to pre routing: 5.8ms
Routing: 13.4ms
Post routing to pre controller: 2.4ms
Pre controller to pre view event: 18.6ms
Pre view event to done: 56.6ms

Standard profile:

Start to pre session: 14.0ms
Pre session to pre request event: 2.2ms
Pre request event to pre authentication: 0.6ms
Authentication: 1.0ms
Post authentication to pre routing: 6.0ms
Routing: 16.1ms
Post routing to pre controller: 4.0ms
Pre controller to pre view event: 19.9ms
Pre view event to done: 89.4ms

Some quick thoughts about the above:

catch’s picture

It's interesting that the standard profile is 2ms slower to pre-session; the actual code path should be pretty much the same there. If this is due to the number of modules installed or particular request listeners, it might be worth comparing xhprof runs to that point. Same with the additional time for routing/pre-controller.

catch’s picture

#2479459: BlockRepository::getVisibleBlocksPerRegion() does an uncached entity query on every request gives an 8-14ms improvement with the minimal profile, and I haven't seen an issue for that elsewhere.

fgm’s picture

Just for reference, D7 had a similar problem: profiling a slow site last week, I noticed its equivalent standing out on a flamegraph.

Wim Leers’s picture

catch thought of a potential way to speed up routing: #2480811: Cache incoming path processing and route matching.

catch’s picture

Only 0.5ms (incl. xhprof overhead), but completely pointless work: #2480943: Reduce strtolower() calls in container get().

catch’s picture

Aki Tendo’s picture

What about the cache assertions I was asked to create?

dawehner’s picture

@Aki Tendo
Please add it as a child issue, that would be great!

Aki Tendo’s picture

tim.plunkett’s picture

Title: Figure out why Drupal 8 is slow » [meta] Figure out why Drupal 8 is slow

Unless I'm mistaken, this is effectively serving as a meta.

Wim Leers’s picture

You're absolutely right.

moshe weitzman’s picture

I was thinking that this is an exploration issue. As a meta, this would be a dupe of #1744302: [meta] Resolve known performance regressions in Drupal 8, IMO. One should be closed.

Fabianx’s picture

Title: [meta] Figure out why Drupal 8 is slow » Figure out why Drupal 8 is slow

Yeah, let's keep this one as an exploration issue.

tim.plunkett’s picture

Priority: Critical » Normal

Okay then it's not critical.

Wim Leers’s picture

Better!

Wim Leers’s picture

#2483433: Optimize CacheableMetadata::merge() + BubbleableMetadata::merge() is able to save several thousand function calls on a vaguely complex page like /node/1.

dawehner’s picture

Priority: Normal » Major

Well, IMHO its at least major.

catch’s picture

Just a note I'm using this as a parent issue for things found via profiling - and there's a bunch of child issues now.

catch’s picture

Title: Figure out why Drupal 8 is slow » Identify necessary performance optimizations for common profiling scenarious
Priority: Major » Critical
Issue summary: View changes

Re-purposing this to take over from #1744302: [meta] Resolve known performance regressions in Drupal 8 as 'the performance critical'. We discussed this a bit on the maintainer call a week or so ago.

Where I think this stands is:

- we can't do apples to apples comparisons between 7.x-8.x any more since the performance issues are quite different (i.e. 8.x should be fantastic behind a reverse proxy due to cache tags, certain very complex pages may be better due to render caching, but raw performance of simple pages is much worse). Those comparisons are still interesting, and people will do them, but they don't help to actually improve 8.x performance.

- from recent profiling we know there is a lot of potential to discover actionable performance issues in the code base (I've found over a dozen major performance issues in the past week, suggesting there are still more to be found).

- Not all of those issues need to be fixed before 8.0.0 is released, but we need to know that we'll be able to resolve them within the constraints of 8.x's semantic versioning (i.e. that we won't be stuck with unresolvable issues until 9.x).

Once the current round of profiling has uncovered sufficient issues, we can downgrade this issue to major. Individual performance issues may still be critical, and this issue will be revisited before release with another full round of profiling to ensure no new regressions, as well as confirming that child issues are the right priority.

catch’s picture

Title: Identify necessary performance optimizations for common profiling scenarious » Identify necessary performance optimizations for common profiling scenarios
catch’s picture

Issue summary: View changes
catch’s picture

Issue summary: View changes
Wim Leers’s picture

Issue summary: View changes
plach’s picture

Title: Identify necessary performance optimizations for common profiling scenarios » [meta] Identify necessary performance optimizations for common profiling scenarios

I guess

Fabianx’s picture

Issue tags: +revisit before release
catch’s picture

Issue summary: View changes

OK here's current status with warm caches + opcache on node/1 (page content type), logged out, no page cache.

HEAD:
176ms, 39,456 function calls

Applying:
#2498849: Entity view controller title rendering is expensive
#2479459: BlockRepository::getVisibleBlocksPerRegion() does an uncached entity query on every request
#2483433: Optimize CacheableMetadata::merge() + BubbleableMetadata::merge()
#2501117: Add static caching for PermissionsHashGenerator::generate()
#2498919: Node::isPublished() and Node::getOwnerId() are expensive
(most of these found and/or patch in the past week or so)

163ms, 32,204 function calls

Absolute times are pretty variable - there's some saving but it's coming up as anything between 10ms and 35ms for me. However the reduction in function calls is predictable.

Kazanir’s picture

I've been looking at Drupal performance a lot lately as part of working on the OSS Performance benchmarking suite that Facebook's HHVM team have put together. We fixed a serious problem in their Drupal 7 target and added new targets to the suite for Drupal 8 in both a page-cache-on and page-cache-off flavor.

I put together a more detailed "blog post" about this here: https://kazanir.github.io/profiling. (I don't really blog but it was too much for a single comment.) I'll just hit the highlights here:

- With fully warmed caches (but no page cache), Drupal 8 is still significantly slower than Drupal 7, by a factor of ~3x. Obviously that's not a useful 1-to-1 comparison due to the major feature gains in Drupal 8 -- but it does mean there are hopefully many easy wins on the performance front before a release.

- With that in mind, I used the OSS Perf suite to record some profiles. For PHP 5.6 I used Blackfire.io, the new profiling suite from SensioLabs. Conveniently, Blackfire allows you to share profiling results very easily. To wit:

- D8, Warm Caches, Single Node
- D8, Warm Caches, Front Page
- D8, Warm Caches, User Login
- D8, Page Cache, Front Page

- In addition, running on HHVM instead of PHP allows us to profile using linuxtools perf. I saved these results and made them available on S3: http://paddedhelmets.s3.amazonaws.com/d8perfstats/index.html

These files can be unzipped (see the blog post for details) and the callgraph examined using the same tool. For example, comparing HHVM's "Repo Auth" mode to normal HHVM performance shows that Repo Auth mode entirely eliminates the autoloader's overhead...

When we mentioned the draft of this work in #drupal-contribute several people mentioned that it would be nice to automate this type of performance profiling. I have this worked out to the point where I could do this and put the data somewhere -- I'm just not sure where the best place would be for the statistics and new profile links to go. Thoughts?

benjy’s picture

When we mentioned the draft of this work in #drupal-contribute several people mentioned that it would be nice to automate this type of performance profiling.

I've had this exact thought before. It would be awesome if, in the future, maybe with the new CI work, it could happen on every issue the same way the testbot does.

Fabianx’s picture

Opened #2501989: [meta] Page Cache Performance.

TL;DR:

- 4,437 us response time for simulated page cache hit

- 25% is lost due to use of Symfony request / response objects and composer autoloader.
- 10% is lost due to more autoloader / request overhead.

=> ~ 35% is lost due to external components performance ( == 1,500 us)

- 15% is lost due to container loading (regardless from where - pure serialization / compilation code loading overhead).

== (2,150 us)

- 25% is lost due to the amount of classes loaded. (54 classes) - pure include statements via loadClass()

== (3,050 us)

=> 69% is pure normal overhead

--

Compared to that:

DB Init + Check: 18% (783 us)

===

If we wanted to make page_cache like on Drupal 7, we would need to:

- Improve vastly on the 3rd party toolset (probably adding some more performant replacement classes, e.g. a combined PSR-4 + APC Autoloader)
- Add patches to Symfony upstream or use more performant classes ourselves
- Optionally move page cache before container retrieval (once content negotiation is gone, that is possible) and run it just with a bootstrap container, purely configured via settings.php
- Kill request policies - Nice concept, but not performant and for 99% of the cases, if PageCache does not fit your policy, extend class => replace service (via settings.php), done.

===

TL;DR 2:

Drupal 8's page cache is not slow, our used toolset is.

Fabianx’s picture

#44: Thanks so much! That is great.

Yes, automated performance tracking would be really nice.

If "c4.xlarge" finally works without problems for that, we are indeed golden.

I tried that a few years ago and got very fluctuating results, now using a dedicated server instead.

The problems however are the scenarios:

- You _did_ compare a page with views for Drupal 7 - I hope?

==

Nice numbers and great blog post!

We are still working on auth user performance ...

chx’s picture

Drupal 8's page cache is not slow, our used toolset is.

Really. (sorry couldn't resist)

dawehner’s picture

.

Kazanir’s picture

#47: Sadly the Drupal 7 target the HHVM suite uses is "bare" with just Devel Generate content and no additional modules. This is one of many reasons a direct, 1-to-1 comparison isn't really USEFUL. But I didn't think the effort of updating the Drupal 7 target was a worthwhile use of developer hours at this point due to what catch says above -- the comparisons are interesting but the profiling and identification of specific problems is most important.

catch’s picture

Made decent progress on #2494987: [meta-6] Reduce cold cache memory requirements and the child issues so have downgraded it to major. More work to do there but it's in a much better state.

Fabianx’s picture

Category: Task » Plan

Meta == Plan :)

webchick’s picture

Category: Plan » Task

Yes, though until we get at least the counter at https://www.drupal.org/drupal-8.0/get-involved updated for the change (see #2503351: Ensure various queries, links, documentation take into account the new "Plan" issue category) let's not use it quite yet.

chx’s picture

Title: [meta] Identify necessary performance optimizations for common profiling scenarios » Identify necessary performance optimizations for common profiling scenarios
Category: Task » Plan
chx’s picture

Title: Identify necessary performance optimizations for common profiling scenarios » [meta] Identify necessary performance optimizations for common profiling scenarios
Category: Plan » Task

Opsie, sorry didn't see the comments above me (crosspost with two comments left out)

webchick’s picture

chx’s picture

Do we keep [meta] on plans?

dawehner’s picture

I don't know but I kinda like being able to really quickly scan issue titles. Maybe d.o. could append [plan] automatically?

xjm’s picture

Re: #57 and #58, I agree that I'd like to be able to see it from the titles/links still. I guess we could file a followup to #1815826: Add "Plan" category to categorize what is called "meta issues" in core right now.

catch’s picture

OK, so there have been several discussions in the past couple of weeks to try to determine something objective that we could base demoting this issue to major on.

The following is what I currently feel would be acceptable, but hasn't been agreed on by all core maintainers or anything:

1. Finalize the scenarios on #2497185: [no patch] Create standardized core profiling scenarios and start tracking metrics for them so that we think they catch enough cases to discover most categories of performance issues.

2. This week or next week, profile those scenarios with HEAD and take note of 1. peak memory usage (via memory_get_peak_usage(), not xhprof), 2. number of function calls in xhprof, and 3. number of database queries.

3. Additionally profile those scenarios with HEAD + critical performance patches applied, making a note of the same thing.

4. Those working on performance issues review the results to a. sanity-check them as accurate and b. determine which look acceptable vs. unacceptable.

5. We then set targets for the final round of profiling as part of #2485119: [meta] The Drupal 8.0.0-rc1 Release Checklist. Those targets should be the same or lower than HEAD + critical performance patches. We may set them lower in cases where the current situation is not good, but we know there are multiple actionable issues to improve it. This ensures we make forward progress in those areas, rather than for example fixing 10 easy to fix major performance improvements while adding 10 hard to fix major performance regressions and end up treading water on absolute numbers despite actually being in a worse situation overall than before.

6. Optionally, we'll start to add test coverage to codify the status quo/targets. Starting with #2495411: Make simpletest fail a test when it detects pages that need more than 64MB, later adding database queries, then function calls if we figure out way to do it (would have test infra dependencies).

Fabianx’s picture

Re #60, point 6: we can count function calls without xhprof. It is ugly, but all that XHProf does is actually available in userland, and if we only do it for certain scenarios (and not the whole test suite) it will work:

http://kpayne.me/2013/12/24/write-your-own-code-profiler-in-php/

TL;DR:

declare(ticks=1);
register_tick_function('do_profile');

- Run backtrace in do_profile and find out currently running function.
- Record [previous_function] => [function] in Array
- Count all elements
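A self-contained sketch of that approach (hedged: this approximates activity via tick-to-tick transitions rather than true call counts, and is far slower than xhprof's C implementation):

```php
<?php
// Userland profiler sketch: on every tick, record which function is running
// and count [previous_function] => [current_function] transitions.
declare(ticks=1);

$profile = [];
$previous = '(start)';

function do_profile() {
  global $profile, $previous;
  // Frame 0 is do_profile() itself; frame 1 is the function that was
  // executing when the tick fired (absent at the top level).
  $trace = debug_backtrace(DEBUG_BACKTRACE_IGNORE_ARGS, 2);
  $current = isset($trace[1]['function']) ? $trace[1]['function'] : '(main)';
  $key = $previous . ' => ' . $current;
  $profile[$key] = isset($profile[$key]) ? $profile[$key] + 1 : 1;
  $previous = $current;
}

register_tick_function('do_profile');

// Some work for the profiler to observe.
function work($n) {
  $total = 0;
  for ($i = 0; $i < $n; $i++) {
    $total += $i;
  }
  return $total;
}

work(100);
unregister_tick_function('do_profile');

// Sum of recorded transitions: a rough proxy for how much code ran.
echo array_sum($profile), "\n";
```

Summing the recorded transitions gives a rough activity count; real use would need to map transitions back to caller/callee pairs more carefully.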

alexpott’s picture

catch’s picture

Discussed this issue a bit with Wim Leers in irc.

I now have metrics on #2497185: [no patch] Create standardized core profiling scenarios and start tracking metrics for them. It would be great if someone would be able to reproduce the scenarios and also record the metrics (ping me for spreadsheet access and/or help if you want to do this).

My suggestion for closing/downgrading this issue, after lots of thinking about it the past few weeks:

We know that raw 8.x performance is not how we'd like it to be. There are many actionable performance issues in the queue, but only a handful are independently critical. We have no automated way to detect performance regressions to simplify the process of detecting them before or soon after they go in.

What I really want to avoid is a situation where we commit 20 major performance improvements over the next month, but at the same time add 20 new performance regressions that cancel out those benefits (and might do so in a hard to fix way). Just closing this issue without any hedging against that leaves us open to be in an even worse state than now when we release.

So what I'd suggest is the following:

1. We have current metrics from the other issue.
2. We set a target of getting function calls + database queries for all scenarios 10% lower (or ~10% if we have a reason to make it better/worse for individual scenarios). 10% should be /easy/ for most cases.
3. We set a target for getting the worst case memory usage 10% lower (don't care that much about best case memory usage).

Then:

1. If we hit those targets, we close/downgrade this issue.
2. If we get down to the point where we want to release an RC, and we still haven't met these targets, we close this issue anyway (expected fail style). However we should ensure the numbers are not worse than the current baseline / last time we checked them just to try to catch anything horrible that might have been added in the meantime.

That gives us something objective to aim for, without a situation where we end up holding up the release based on something that will look obscure to people not knee deep in performance.

It'd be good to get feedback on this from Fabianx and others working on performance, as well as the other core committers as to how this sounds for a plan.

Fabianx’s picture

#63 sounds excellent to me.

effulgentsia’s picture

If we get down to the point where we want to release an RC, and we still haven't met these targets, we close this issue anyway (expected fail style).

To be honest, I'm not crazy about pseudo-critical issues in the queue at this point in the cycle. If we think a sooner RC is more valuable than one that is 10% more performant (and I think it is), then I think we should downgrade this issue now, so that the critical queue reflects our honest assessment of what blocks an RC.

What I really want to avoid is a situation where we commit 20 major performance improvements over the next month, but at the same time add 20 new performance regressions that cancel out those benefits (and might do so in a hard to fix way). Just closing this issue without any hedging against that leaves us open to be in an even worse state than now when we release.

Would it be better then to lower our targets every time we knowingly commit a performance improving patch? And as soon as we discover a regression caused by a given commit, decide whether to revert it or raise our targets?

catch’s picture

Would it be better then to lower our targets every time we knowingly commit a performance improving patch? And as soon as we discover a regression caused by a given commit, decide whether to revert it or raise our targets?

That would be great, but I don't think it's workable without automated performance testing.

To be honest, I'm not crazy about pseudo-critical issues in the queue at this point in the cycle. If we think a sooner RC is more valuable than one that is 10% more performant (and I think it is), then I think we should downgrade this issue now, so that the critical queue reflects our honest assessment of what blocks an RC.

Well we have critical performance issues in the queue and no way to ensure they're not cancelled out by changes elsewhere, except for the plan outlined in this issue.

effulgentsia’s picture

Well we have critical performance issues in the queue and no way to ensure they're not cancelled out by changes elsewhere, except for the plan outlined in this issue.

Except for #2354889: Make block context faster by removing onBlock event and replace it with loading from a ContextManager, I think all of our current critical issues tagged "Performance" are about cache accuracy rather than speed-up. As for that block context issue, what makes it critical isn't just that it improves performance, but that it requires too much of a BC break to do in a minor release. So even if the improvement from that issue is regressed elsewhere, I don't think fixing that regression would necessarily be critical; it would only be so if it requires such a BC break.

And as soon as we discover a regression caused by a given commit, decide whether to revert it or raise our targets?...That would be great, but I don't think it's workable without automated performance testing.

Yes, I think this is the crux of the issue. We might have existing undiscovered large performance problems that can only be fixed by a large BC break, but people still have time before RC in which to find them, and if no one does find them, then oh well, we can never achieve perfect omniscience, and shouldn't hold up an RC until we have it. But, if we commit a new regression between now and RC that can only be reverted/fixed with a large BC break, then people don't have a reasonable amount of time in which to discover that, and that's the concern here. Keeping this issue open until we achieve a reduction of 10% relative to some arbitrary state (beta-12?) is a way of hedging for that, in that it will help remove the most likely masks. However, how about also allowing this issue to get closed if/when we get to a point where there's a script that could be run on multiple people's local machines that could generate a CSV file with the numbers being discussed in #2497185: [no patch] Create standardized core profiling scenarios and start tracking metrics for them for every git commit from the point that the agreed upon profile containing those scenarios is committed to HEAD? Because that would let us look at a graph to find commits that caused regressions from that point, independently of improvements elsewhere.

effulgentsia’s picture

But, if we commit a new regression between now and RC that can only be reverted/fixed with a large BC break, then people don't have a reasonable amount of time in which to discover that, and that's the concern here.

At some level, I also think that while this is a valid concern, it shouldn't block the RC and therefore, neither a 10% improvement nor automated profiling should be critical (both can continue to be Major). Because even if RC1 does have some bad performance regression that we didn't catch until after tagging it, we can still fix it between RC1 and release, if it warrants a critical priority.

catch’s picture

Except for #2354889: Make block context faster by removing onBlock event and replace it with loading from a BlockContextManager, I think all of our current critical issues tagged "Performance" are about cache accuracy rather than speed-up.

Yes, this is because other than a couple of recently committed issues, we've not been able to find 'smoking gun' 1ms/100ms/1s performance improvements (or equivalent) via profiling. I think everyone working on performance is trying very hard to only mark critical performance issues that really do meet the 'performance critical' definition.

However there are lots of 0.5ms/5ms/50ms improvements still to make that cumulatively will get raw performance to a much better place than now. We have pages with literally millions of function calls (at least with some contrib) and are only a hair's breadth under the already doubled Drupal 7 memory requirements. Those fixes should still be prioritized even if no single patch should individually hold up the release.

Keeping this issue open until we achieve a reduction of 10% relative to some arbitrary state (beta-12?) is a way of hedging for that, in that it will help remove the most likely masks

Right, that's the idea. It means we don't hold up the RC unless we find something horrible, but it means that performance gets prioritised until either we've made a decent improvement or we run out of time because everything else is fixed. Closing this issue now without such hedging feels like 'yay, done!' when the situation is still quite bad and could yet get worse. For example, just two days ago I committed a patch using PHP reflection in the critical path without profiling, because it was critical and we have an upstream issue to get rid of the reflection need #2408371: Proxies of module interfaces don't work - so no choice there, but not a good feeling either.

However, how about also allowing this issue to get closed if/when we get to a point where there's a script that could be run on multiple people's local machines that could generate a CSV file with the numbers being discussed in #2497185: [no patch] Create standardized core profiling scenarios and start tracking metrics for them for every git commit from the point that the agreed upon profile containing those scenarios is committed to HEAD?

This is a good goal especially if we set the final numbers lower when we achieve better ones, I'd be happier with failing tests than a graph though.

webchick’s picture

We should bear in mind the reality that most people will not be deploying production sites on 8.0.0, both because of lack of available modules, and also because they've learned from experience that .0 releases (of any software) are bad juju. So if 8.0.0 shipped slower than we'd like, and 8.1.0 (or heck, 8.0.1) got much faster from us fixing some really dumb things in major issues post-release, that's just something awesome to highlight in the release notes of the next version.

I feel like we've already given an incredible amount of attention to performance in D8, and certainly more so than in all other releases combined. There are people such as Wim Leers working full-time on D8 performance since beta, thousands of dollars of D8 Accelerate money being spent on performance-related grants, etc.

As such, I'm not really in favour of introducing new performance gates, and I'm definitely not on board with introducing a release blocker related to automated performance testing, which has been around for literally years as an unresolved initiative.

effulgentsia’s picture

However there are lots of 0.5ms/5ms/50ms improvements still to make that cumulatively will get raw performance to a much better place than now. We have pages with literally millions of function calls (at least with some contrib) and are only a hair's breadth under the already doubled Drupal 7 memory requirements. Those fixes should still be prioritized even if no single patch should individually hold up the release.

I think at this point we should remove the "Over ~100ms or more savings with cold caches and would have to be deferred to a minor version" criterion, but would be fine with replacing it with "Over ~50ms or more savings with cold caches and would have to be deferred to 9.x". Because per #70, I think it's fine for all performance numbers (cold cache, warm cache, page cache) to be whatever they are for 8.0 so long as it's possible for them to go from fixed to in-production in less than 6 months (sites that really want those improvements before the following minor can also experiment with applying the corresponding patch to their deployment).

I'd also be fine with changing our 1ms/10ms thresholds to 0.5ms/5ms (or whatever other cutoff we think makes sense) so long as we keep the "and would have to be deferred to 9.x" part. Or maybe instead of 0.5/5/50, we could express it as >5% (or whatever other number makes sense) improvement for # of function calls, # of db queries, or memory usage, for any scenario. Again, I'm less concerned about the cutoff number or unit as I am about only holding up RC on the "and would have to be deferred to 9.x" part.

In summary, I'm most concerned about holding a critical open for improving performance in general or out of nervousness that there are still problems that haven't been discovered. But I'm less concerned about promoting a specific performance issue to critical after it's been discovered, if it truly can't be solved in a minor release.

Fabianx’s picture

I don't understand what is so bad about the plan outlined in #63?

I personally think performance is still critically bad and while individual issues are major, they only together become critical, hence a [meta] being appropriate.

Also, optimization is a process, not a one-shot fix (that would be nice).

I am pretty sure that setting good goals (e.g. 10% lower) is the best way to ensure we hit some targets and don't introduce regressions, and I am pretty positive we can.

Also, fixing performance issues usually uncovers more things that can be fixed / optimized => one critical [meta] is appropriate.

Also this meta is very actionable right now.

dawehner’s picture

While doing some profiling I used the admin account, so tour_page_bottom() got executed. It's not much, but at least that noise should be avoided in profiling.
#2520594: Cache tour_page_bottom() is an issue for that.

effulgentsia’s picture

I personally think performance is still critically bad and while individual issues are major, they only together become critical

That's the part I disagree with. An issue being critical means that it is so bad, that it is worth delaying 8.0's release date for it. And given a choice between 8.0 being released sooner vs. being released later but with better performance, I think the vast majority of Drupal users would currently prefer the former. I base that assumption on, among other things, that https://www.drupal.org/project/usage/drupal shows that deployments of new 7.x sites have essentially stopped (or at best, only match the rate at which old 7.x sites are retired).

If we get down to the point where we want to release an RC, and we still haven't met these targets, we close this issue anyway (expected fail style).

That part of the proposal partially addresses my concern, but not fully. Because nothing prevents Major issues from being worked on, so if it doesn't need to block an RC, then why keep it open as a Critical in the meantime? The argument to do so seems based on wanting it to continue to have elevated attention from the people who would otherwise be choosing to put their attention on other criticals. But to the extent that the priority affects what people choose to give attention to, then I would rather they give their attention to the other criticals, since I think a sooner release is more important than a more performant one. And to the extent that priority doesn't affect what people choose to work on, then why keep this open as a critical?

catch’s picture

@webchick

I feel like we've already given an incredible amount of attention to performance in D8, and certainly moreso than in all other releases combined.

That is likely correct in terms of absolute amount of work. However, the same is true for any other aspect of Drupal 8 you could name (translation, REST, configuration management, accessibility, UX, site builder features, etc.). All the work we've done on various 8.x features has come with varying levels of performance degradation - either unavoidably due to feature addition, or from unoptimized code, or usually a combination of the two. So the extra work on other things directly increases the amount of work required on performance. Or to put it another way, we've done more work to degrade performance in Drupal 8 than in all previous releases combined, too.

Drupal 7 had a vast amount of effort dedicated to performance (I personally spent hundreds if not thousands of hours on it over 2-3 years, some of which paid, most not). Additionally Drupal 7 had the advantage that two large scale installations were based on it - Examiner.com and Drupal Gardens. Examiner funded both dedicated d8accelerate-style and indirect itch-scratching performance and scalability work for over a year before the 7.0 release.

With Drupal 6, Drupal.org went live with one of the RCs iirc (couldn't find the switch date on google) - or at least was running those on staging with load testing prior to 6.0. So for both releases, and especially D7, we had high traffic sites with dynamic/user-generated content that either stood up, or went down and provided information on why prior to release.

We don't have those same reference points this time around - the closest is berdir's site, but I'm less (as in not at all) familiar with that than with Drupal.org, Gardens or Examiner. So my confidence in the ability of 6.x and 7.x to survive under pressure (high traffic, dynamic/user-generated content etc.) was higher when both of those were at beta (or late alpha for D7) than it is with 8.x at the point we are now - it hasn't had nearly as much testing in the wild for these kinds of issues. I've also personally profiled 8.x a lot less than I did 7.x overall, because I'm trying to keep track of lots of other things this time.

So on absolute quantitative effort - possibly
On relative effort compared to the scope of the overall release - definitely not.
On qualitative data points via sites running in the wild - definitely not either.

@effulgentsia

I base that assumption on, among other things, that https://www.drupal.org/project/usage/drupal shows that deployments of new 7.x sites have essentially stopped (or at best, only match the rate at which old 7.x sites are retired).

That is not necessarily a good metric to determine their appetite for an early release vs. experiencing performance issues in that release.

If people build sites on 8.x because 8.0.0 comes out quicker, then those sites fall over due to performance/scalability/stampedes, or if they refuse to even try building sites due to documented issues that we didn't address, then it does not look good for 8.0.0 adoption either.

This could mean abandoning Drupal as a platform altogether, or falling back to pre-migration 6.x or 7.x versions of sites for X months before trying again. "Can I serve my current 6.x/7.x traffic after an 8.x migration" is an important/essential question for people to ask themselves when migrating sites between versions.

Growth/adoption !== health. Sometimes they correlate, often not. Drupal 7 had very slow adoption for the first 6-9 months after release due to Views, upgrade path and various other stability/contrib issues - releasing it 3 months later than we did might not have been a bad thing at all despite the massive relief that people felt when 7.0 came out after a horrible 2 year slog to get there.

The argument to do so seems based on wanting it to continue to have elevated attention from the people who would otherwise be choosing to put their attention on other criticals.

Given we have no way to fail tests on performance regressions yet (unlike even most security issues), the only way to prevent that is continuous attention. The risk of that attention coming at the expense of some other critical issues is, for me, less of a concern than lack of attention letting the situation get even worse than it is now (which, as documented, is very bad) - which happens almost by default.

#2497243: Replace Symfony container with a Drupal one, stored in cache is a fundamental issue with a key part of 8.x architecture that 1. was only found due to load testing (afaik), 2. is a critical-and-a-half since we also changed the proxy service implementation to facilitate it, and 3. was found just four weeks ago. Similarly, the extent (and especially the fixability) of the SafeMarkup memory leaks was also only found recently. I know that SafeMarkup is causing berdir's site trouble (when combined with the form cache, since they're on an earlier beta) - we're only just getting to the point, in the past 2-3 weeks, where we fully understand these issues in terms of diagnosis, and they're not fully resolved yet at all.

To compare to two other aspects of the release which aren't features:

The security bug bounty is definitely going to mean the release comes out later (or at minimum, not sooner). But not doing that to get the release out sooner would be very shortsighted. Similarly not doing usability testing at UMN would probably mean the block UI critical was never opened (since that's not necessarily something 8.x-familiar people would ever notice by themselves), but not usability testing the release would also have been a serious omission. The advantage with usability testing is we know the interface is stable at this point more or less. Security and performance don't have that luxury with the code base.

While security is unique compared to other areas of the project, something like the container race condition is arguably a lot worse in practice on the vast majority of real sites than config translation XSS or field UI open redirects would have been (in the sense that the chances of either of those resulting in a site being compromised/offline are very low, whereas the chance of a container rebuild taking a site offline are very high). So I really struggle with the desire to take the focus off performance given the state it's in and the rate at which both issues and fixes for them are being uncovered.

xpete’s picture

I'm doing a Drupal 7 website, and when I'm on a node view page I get an error about something that is wrong in the home page template. I have to fix it, but that's not the problem. The problem is: why is Drupal even loading the home page template when I'm not on the home page? I guess that's one of the reasons Drupal 7 is so slow and uses so much memory. It keeps loading things that are not needed, and I hope (but doubt) this has been fixed in Drupal 8. Drupal should ONLY load what it's going to show. This way it's not scalable.

From what I read here and in other performance issues, the way to "fix" most problems is using the cache. I don't think the cache system should even be part of Drupal. It's a way to sweep the performance issues under the rug, not a way to fix them. Maybe it could be a module.

Like I wrote in another issue, as Drupal 8 is a major rewrite I was expecting the focus to be on performance (and usability, btw), not on features. I can get more features by adding more modules, but I can't get better performance by adding more modules. Drupal is already really great on features.

webchick's comments make me think that the Drupal core team doesn't really care about performance.
As Drupal is now based on Symfony, do a benchmark of an empty page in Symfony and then do the same on a Drupal 8 empty page... Then try to reduce the amount of memory used to a minimum.

Sorry for the rant.

David_Rothstein’s picture

@xpete, neither Drupal 7 nor Drupal 8 load the home page template when you're not on the home page. This sounds like a problem caused by custom code on your site. Please create a separate issue since it's not related to this one.

Fabianx’s picture

#76: We do care deeply about performance and have been following the principle of "Avoid, Cache, Defer": first we try to avoid doing heavy work, then we cache it, and we work hard on being able to defer it.
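For readers following along, the "Avoid, Cache, Defer" idea can be sketched in a few lines. This is a hedged analogue in Python (Drupal itself is PHP), and `expensive_discovery` is a made-up stand-in, not a Drupal API:

```python
import functools

# "Avoid": do the work lazily, only when someone actually asks for it.
# "Cache": remember the result so repeated calls are nearly free.
# "Defer": the first (expensive) call can happen late in the request, or never.

@functools.lru_cache(maxsize=None)
def expensive_discovery(kind: str) -> tuple:
    # Stand-in for heavy work such as scanning every module for plugins.
    return tuple(f"{kind}_{i}" for i in range(3))

first = expensive_discovery("route")   # pays the full cost once
again = expensive_discovery("route")   # cache hit, no recomputation
assert first is again
```

The same shape recurs throughout Drupal 8 as static caching plus a persistent cache backend; the point of the principle is that the cache is the second line of defense, after avoiding the work entirely.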

However:

Object-oriented code, and hence also Symfony, is not necessarily what makes Drupal fast; it is part of what makes Drupal 8 slow (currently).

See #2501989-2: [meta] Page Cache Performance for a detailed analysis and, later in that issue, how performance could be increased when optimizing that.

The reason why Symfony and other frameworks are fast is not that they are optimized for speed (they were not by default, though they have worked hard to improve that), but that they don't have the complexity of a Drupal system.

The problem is that Drupal calls a complex function not just once or twice, but 1000s of times potentially. And these numbers are not taken into account by the upstream frameworks, which have maybe 10-20 routes or 20 services or ...

What makes Drupal powerful also gives it complexity, and complexity needs time. However the added components have not necessarily been a good fit in their implementation in ensuring that Drupal 8 is faster, though the ideas and concepts they brought to Drupal have strongly advanced our ecosystem.

And you are absolutely right, cacheability != performance.

However, having a strong cacheability system can benefit everyone - from smaller sites up to large enterprise sites.

And last:

We need to ship at some point, but we will continue to improve performance throughout the D8 cycle (and hopefully with some selected issues also during the RC phase).

When this issue gets back to major (so we can enter the RC phase because not much else blocks the release), this does not mean that performance is checked off.

It only means that we decided that for the tested scenarios it is 'good enough'.

---

Performance vs. Features - That is always a trade-off to make, but the fastest page is a static HTML page with no features. The problem is the most developer or user friendly solution is not necessarily the fastest one.
(e.g. using Twig has a very measurable overhead, but themers love it!)

All we can do is to encapsulate the additional complexity this adds as much as possible, which includes caching that information.

I still think there is a lot we can do to improve Drupal 8's performance.

xpete’s picture

I know OOP and ORM. I never worked with Symfony, but I have worked with other MVC frameworks (CodeIgniter, Zend, Django, ASP.NET MVC). I know how they work.
I know that an empty page in an MVC framework doesn't get any data from the database, while a Drupal empty page gets lots of data from the database into memory. I guess that's where the biggest problem is: too much data being loaded that is not really needed... but it's just a guess.
MVC frameworks are very good at splitting complex systems into small parts.

I saw the issue you mentioned, but I don't care much about cache inside Drupal. It's nice to have, but it's not used often on a dynamic website. I don't think it's worth the amount of time the developers spent on it.
Getting about a 1% improvement when double the memory is used, like I saw in the comments you wrote on that issue, is not a big improvement. But it's great to see some work done anyway.

"1000s of times potentially." on one http request? That is just wrong.

"However the added components have not necessarily been a good fit in their implementation in ensuring that Drupal 8 is faster," My guess too... But i think the big problem is in the core.... Drupal 7 already use too much memory for what it did.

"Performance vs. Features - That is always a trade-off to make, but the fastest page is a static HTML page with no features. The problem is the most developer or user friendly solution is not necessarily the fastest one."
True. But there are also acceptable limits to resource usage for a feature.

I doubt Twig or Symfony are the problem. Template systems are not memory hogs.
I think Drupal needs bigger changes than what can be done in a beta or RC phase.

dawehner’s picture

@xpete
Well, Drupal is complex. Let me give you an example. In Drupal you are allowed to create routes via the UI, for example using Views. On top of that we also have a LOT of routes (potentially 1000s). That is why Drupal stores the routes in the database instead of, like many of those frameworks, just in raw code. Just imagine if you would have to load each view in order to get the routes on each request. Those things though sum up.
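To make the route-table point concrete: storing routes in the database means a request only has to match against a prebuilt table, never rebuild routes from their sources (e.g. load every view). A hedged Python sketch, with a plain dict standing in for the database table and made-up route names:

```python
from typing import Optional

# Prebuilt table, analogous in spirit to Drupal 8's {router} database table.
# It is built once at rebuild time; per request we only look rows up.
ROUTE_TABLE = {
    "/node/{id}": "entity.node.canonical",
    "/admin/content": "system.admin_content",
    "/frontpage": "view.frontpage.page_1",  # a Views-provided route
}

def match(path: str) -> Optional[str]:
    """Match a concrete path against the table without loading any views."""
    parts = path.strip("/").split("/")
    for pattern, name in ROUTE_TABLE.items():
        p_parts = pattern.strip("/").split("/")
        if len(p_parts) != len(parts):
            continue
        # A "{...}" segment matches anything; other segments must match exactly.
        if all(p.startswith("{") or p == c for p, c in zip(p_parts, parts)):
            return name
    return None

print(match("/node/2"))        # entity.node.canonical
print(match("/no/such/path"))  # None
```

Drupal's actual router is more involved (it narrows candidates with a database query on the path outline before matching), but the shape is the same: per-request cost scales with the lookup, not with the number of route providers.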

On top of that, the concept of an empty page is not really there for Drupal.
We display local tasks + breadcrumbs on every page. We have blocks in all kind of regions etc.

It would be great if we could get back to the practical part of this issue, which is about finding those improvements - and in which you are warmly welcome to participate. #2529514: Replace system.filter::protocols with container parameters, for example, is one of those small things.

dawehner’s picture

Another issue which does not change performance but makes profiling a bit easier: #2531972: Move ThemeManager::theme() to ThemeManager::render()

xpete’s picture

"have to load each view in order to get the routes on each request. Those things though sum up. "

No need to do that. Only load the views that match the route on the request and/or create a routing table.

"On top of that, the concept of an empty page is not really there for Drupal.
We display local tasks + breadcrumbs on every page. We have blocks in all kind of regions etc."
I know that. By "empty page" I mean the page I get with a new Drupal installation, without content.

Aki Tendo’s picture

Added #2533116: [meta] Identify candidate code blocks for conversion to generator format for performance improvement. as a child of this issue with a specific focus on areas of the code that could be converted to exploit the PHP 5.5 generator syntax to get a performance boost or two in that manner.
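For context on why generators can help: a generator yields items one at a time instead of materializing a full list, so memory stays flat and consumers that stop early never pay for the rest. A minimal Python analogue (PHP 5.5 generators behave the same way; the item names here are made up):

```python
import sys

def all_items(n):
    # Eager: builds every item up front and holds them all in memory.
    return [f"item_{i}" for i in range(n)]

def items(n):
    # Lazy: produces one item per iteration; nothing is stored between steps.
    for i in range(n):
        yield f"item_{i}"

# The generator object stays tiny no matter how many items it will yield.
print(sys.getsizeof(all_items(100_000)) > 100 * sys.getsizeof(items(100_000)))  # True

# A consumer that stops after the first item never computes the other 99,999.
print(next(items(100_000)))  # item_0
```

Whether this translates into a win for any given Drupal code path depends on whether callers actually consume the whole result; where they always do, the generator mostly saves peak memory rather than CPU.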

dawehner’s picture

Just some issue linking.

alexpott’s picture

Priority: Critical » Major
Issue tags: -revisit before release

Discussed with @catch, @xjm, @webchick and @effulgentsia. We agreed to downgrade this issue because #2485119: [meta] The Drupal 8.0.0-rc1 Release Checklist contains a specific entry to check performance profiles before release and #2497185: [no patch] Create standardized core profiling scenarios and start tracking metrics for them remains critical and is finally ironing out the bugs in the automated scripts.

The issue is remaining open to track performance work.

Version: 8.0.x-dev » 8.1.x-dev

Drupal 8.0.6 was released on April 6 and is the final bugfix release for the Drupal 8.0.x series. Drupal 8.0.x will not receive any further development aside from security fixes. Drupal 8.1.0-rc1 is now available and sites should prepare to update to 8.1.0.

Bug reports should be targeted against the 8.1.x-dev branch from now on, and new development or disruptive changes should be targeted against the 8.2.x-dev branch. For more information see the Drupal 8 minor version schedule and the Allowed changes during the Drupal 8 release cycle.

Version: 8.1.x-dev » 8.2.x-dev

Drupal 8.1.9 was released on September 7 and is the final bugfix release for the Drupal 8.1.x series. Drupal 8.1.x will not receive any further development aside from security fixes. Drupal 8.2.0-rc1 is now available and sites should prepare to upgrade to 8.2.0.

Bug reports should be targeted against the 8.2.x-dev branch from now on, and new development or disruptive changes should be targeted against the 8.3.x-dev branch. For more information see the Drupal 8 minor version schedule and the Allowed changes during the Drupal 8 release cycle.

Version: 8.2.x-dev » 8.3.x-dev

Drupal 8.2.6 was released on February 1, 2017 and is the final full bugfix release for the Drupal 8.2.x series. Drupal 8.2.x will not receive any further development aside from critical and security fixes. Sites should prepare to update to 8.3.0 on April 5, 2017. (Drupal 8.3.0-alpha1 is available for testing.)

Bug reports should be targeted against the 8.3.x-dev branch from now on, and new development or disruptive changes should be targeted against the 8.4.x-dev branch. For more information see the Drupal 8 minor version schedule and the Allowed changes during the Drupal 8 release cycle.

Version: 8.3.x-dev » 8.4.x-dev

Drupal 8.3.6 was released on August 2, 2017 and is the final full bugfix release for the Drupal 8.3.x series. Drupal 8.3.x will not receive any further development aside from critical and security fixes. Sites should prepare to update to 8.4.0 on October 4, 2017. (Drupal 8.4.0-alpha1 is available for testing.)

Bug reports should be targeted against the 8.4.x-dev branch from now on, and new development or disruptive changes should be targeted against the 8.5.x-dev branch. For more information see the Drupal 8 minor version schedule and the Allowed changes during the Drupal 8 release cycle.