When loading the core admin/content/file page the visitor is greeted by a WSOD, with the following error recorded:
PDOException: SQLSTATE[42000]: [Microsoft][SQL Server Native Client 11.0][SQL Server]Column 'file_managed.uid' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.: SELECT TOP(50) fm.[fid] AS [fid], fm.[uid] AS [uid], SUM(fu.count) AS total_count, fm.timestamp AS _field_0, fm.fid AS _field_1 FROM {file_managed} fm LEFT OUTER JOIN {file_usage} fu ON fm.fid = fu.fid GROUP BY fm.fid ORDER BY fm.timestamp DESC; Array ( ) in PagerDefault->execute() (line 79 of includes\pager.inc).

This is running on IIS 7.5 with PHP 5.4 against SQL Server 2008 R2 (v10.50.4021).

I tried running the query directly in SQL Server Management Studio and it gives the same error message. I've also confirmed that the field does exist; SQL Server just rejects it in this query.

Comments

DamienMcKenna’s picture

Here's the SQL Server query, made more readable:

SELECT TOP(50) fm.[fid] AS [fid], fm.[uid] AS [uid], SUM(fu.count) AS total_count, fm.timestamp AS _field_0, fm.fid AS _field_1
FROM {file_managed} fm
LEFT OUTER JOIN {file_usage} fu ON fm.fid = fu.fid
GROUP BY fm.fid
ORDER BY fm.timestamp DESC

The original MySQL query is:

SELECT fm.fid AS fid, fm.uid AS uid, SUM(fu.count) AS total_count
FROM file_managed fm
LEFT OUTER JOIN file_usage fu ON fm.fid = fu.fid
GROUP BY fm.fid
ORDER BY fm.timestamp DESC
LIMIT 50 OFFSET 0

DamienMcKenna’s picture

Title: SQL error loading core admin/content/file page » Error loading file entity admin page due to improper GROUP BY handling

The error stems from the first query in file_entity_admin_files() in the file_entity.admin.inc file from the File Entity module (v7.x-2.x-dev):

  $query = db_select('file_managed', 'fm')
    ->extend('PagerDefault')
    ->extend('TableSort');
  $query->leftJoin('file_usage', 'fu', 'fm.fid = fu.fid');
  $query->groupBy('fm.fid');
  $query->addExpression('SUM(fu.count)', 'total_count');
  file_entity_build_filter_query($query);

  $result = $query
    ->fields('fm', array('fid', 'uid'))
    ->limit(50)
    ->orderByHeader($header)
    ->addTag('file_access')
    ->execute()
    ->fetchAllAssoc('fid');

The reason for the error is that SQL Server is stricter about how GROUP BY queries are executed: every non-aggregated column in the SELECT list must also appear in the GROUP BY clause.

In this case the solution is to add a GROUP BY entry for each {file_managed} column that appears in the SELECT list:

  $query->groupBy('fm.fid');
  $query->groupBy('fm.uid');
  $query->groupBy('fm.timestamp');

... instead of just adding fm.fid. However, this should be handled dynamically by SelectQuery_sqlsrv->__toString() in select.inc.
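
As a rough, untested sketch of what that dynamic handling might look like (it leans on the parent SelectQuery's getFields() and getGroupBy() accessors, ignores expressions and table-less fields, and doesn't cover the ORDER BY columns the driver adds to the select list itself):

  // Untested sketch: when the query already has a GROUP BY, make sure
  // every plain selected column is part of it, as SQL Server requires.
  $group_by = $this->getGroupBy();
  if ($group_by) {
    foreach ($this->getFields() as $field) {
      // Expressions and table-less fields are ignored in this sketch.
      if (!empty($field['table'])) {
        $spec = $field['table'] . '.' . $field['field'];
        if (!isset($group_by[$spec])) {
          // groupBy() keys entries by the spec, so repeats are harmless.
          $this->groupBy($spec);
        }
      }
    }
  }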

DamienMcKenna’s picture

It might be worthwhile to fix the problem in preExecute().

Also, I noticed that the logic for adding the group-by fields to the SELECT list doesn't check whether the fields are already included, so a field can end up selected twice. Are there any performance penalties for doing so? It shouldn't be too hard to check that a field isn't already in the list before adding it.
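
A hypothetical sketch of that check (the real column matching in select.inc handles more cases; this only covers plain "table.column" entries):

  // Only add a GROUP BY column to the SELECT list if it is not already
  // selected.
  foreach ($this->getGroupBy() as $spec) {
    if (strpos($spec, '.') === FALSE) {
      continue;
    }
    list($table, $column) = explode('.', $spec, 2);
    $already_selected = FALSE;
    foreach ($this->getFields() as $field) {
      if ($field['field'] == $column && $field['table'] == $table) {
        $already_selected = TRUE;
        break;
      }
    }
    if (!$already_selected) {
      $this->addField($table, $column);
    }
  }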

DamienMcKenna’s picture

Component: Code » Select builder
david_garcia’s picture

Project: Drupal driver for SQL Server and SQL Azure » File Entity (fieldable files)
Version: 7.x-1.2 » 7.x-2.x-dev
Component: Select builder » Code
Status: Active » Needs work

I have seen this.
I have done this.
You don't want this.

:)

I think this is not a SQL Server issue; it is a GOOD DESIGN decision not to allow the inclusion of fields in the SELECT that are not present in the GROUP BY. Please read:

http://weblogs.sqlteam.com/jeffs/archive/2007/07/20/but-why-must-that-co...

It is actually sloppy MySQL swallowing everything thrown at it.

Another thing is that we want changes to the driver to reduce the impedance mismatch among database engines.

I think it would be a better idea to move this issue to the File Entity module, where the bad query is being constructed.
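
For what it's worth, fixing it where the query is built would boil down to the groupBy() additions already suggested above, applied in file_entity_admin_files(). Untested sketch:

  $query = db_select('file_managed', 'fm')
    ->extend('PagerDefault')
    ->extend('TableSort');
  $query->leftJoin('file_usage', 'fu', 'fm.fid = fu.fid');
  // Group on every non-aggregated column the query selects or sorts on,
  // not just fm.fid, so the GROUP BY is valid on engines that enforce
  // the standard.
  $query->groupBy('fm.fid');
  $query->groupBy('fm.uid');
  $query->groupBy('fm.timestamp');
  $query->addExpression('SUM(fu.count)', 'total_count');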

Dave Reid’s picture

Devin Carlson’s picture

Status: Needs work » Closed (duplicate)

Closing as a duplicate per #6.