Chaining, Base Fields, Advanced Prompting and Processors

Last updated on
19 May 2024

While AI Interpolator can be used as an editorial improvement tool, its main strength lies in automation and in longer logical chains of generations, where you can do things like generating full articles from as little as a one-sentence prompt.

To make this possible, every interpolator has a processor and a weight; most also have a base field, and some have prompts.

Base Field

The base field is the field that we want to use as the input for the rule that is about to run. Two examples:

  • We want to summarize a text. The base field would be the text we want to summarize. This can be incorporated into the prompt.
  • We want to transcribe an audio file. The base field would be the audio file field.

Since the rule knows what field type it is about to generate and what it wants to generate from, only the possible fields will show up here.

If there is no base field that the rule can generate from, this list will be empty.

Simple Prompting vs Advanced Prompting

When a prompt is needed for a rule and the internal AI Interpolator way of prompting is used, there is always a simple context fetcher. This means that there is usually a placeholder for different values that can be fetched from the base field; the most common are {{ context }} and {{ raw_context }}.

These can be pasted into the prompt and will be replaced with the actual text. The raw context can be used when the base field is a formatted text and you want to keep the HTML in the query (for instance to ask about all the headers in an HTML document).
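As an illustration, a summarization rule whose base field is the article body might use a prompt like the following. The wording is just an example; only the placeholder comes from the module:

```
Summarize the following article in three sentences:

{{ context }}
```

At run time, {{ context }} is replaced with the text of the base field before the prompt is sent to the model.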

There are, however, jobs that require more advanced prompting from multiple fields. For those, advanced prompting exists. To be able to enable advanced prompting, you must install the Token module.

Examples of advanced prompting cases are when both the title and the description are needed in the prompt, or when you want to prompt with content from other entities.
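With the Token module installed, entity values can be mixed into the prompt using token syntax. Here is a sketch of such a prompt; the field machine name field_description is a hypothetical example, and the tokens actually available depend on your site's fields and installed modules:

```
Write a short teaser based on the title "[node:title]" and the
following description:

[node:field_description]
```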

Weight

Weight is just a simple setting that decides when a job is triggered. The lower the value, the earlier it is triggered. Think about a job where you want to summarize the content of a specific article on cnn.com. This has two steps - scrape content and summarize content.

In that case the scrape content job has to have a lower weight than the summarize content job, since otherwise there won't be anything to summarize.

Note that the weights are only followed when all rules use the same processor.

Processors

A processor is the rule for how each job should be processed. The following processors are available; some require additional modules.

The processors are plugin based, meaning you can easily add another way of running the rules if you want.

Direct Processor

This is the simplest processor. When an entity gets saved, the rules run directly in the same request. If you know that the rules will run very fast, this is a good option. However, if you run long generation jobs, this might lead to server timeouts.

Batch Processor

This uses Drupal's Batch API to make sure that each rule runs in its own request, using JavaScript to trigger them. This also gives you a progress bar. This is the best processor for websites that generate all their content via the Drupal GUI. However, it fails if you want to run automation jobs in the background or any time you save entities programmatically.

Queue/Cron Processor

This runs the jobs via the Queue API. The queue can be worked off via the queue:run command in Drush, or one job can be run per cron run. This is the best option for automation and for long-running jobs like automated video descriptions. It does, however, mean that you won't see any result when you save the entity.

Use the autogenerated AI Interpolator Status field to figure out if the process is finished.
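As a sketch, the queued jobs can be processed from the command line with Drush. The exact queue name registered by AI Interpolator is not given here, so treat the placeholder as something to look up first:

```shell
# List the registered queues and their item counts.
drush queue:list

# Run all items in the relevant queue (replace the placeholder with
# the name shown by queue:list).
drush queue:run <queue_name>

# Alternatively, let cron process the queue one run at a time.
drush cron
```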

NOTE: The Queue Unique module is a must to install in order to use this; otherwise the same job might be queued up multiple times.

ECA Processor

This processor is available by installing the ECA Processor module. It works together with the phenomenal ECA module to open up the possibility for extremely flexible workflows, with conditions and actions based on the generated content. Things like AI-based unpublishing or role-based generation become easy to implement.

Note that while the ECA module can feel quite complex when starting out, it is extremely powerful once you get to know it.

Post OB Processor

This processor is available by installing the Post OB Processor module. It has the very specific purpose of being used for APIs. It lets Drupal finish everything it needs to do on entity insert or update and flushes the result to the browser, but then continues the PHP process in the background.

This means that you can set up, for instance, a JSON:API where the user sends a POST (create) request and gets back the answer that the entity is created, together with its UUID.

They can then poll the GET endpoint and check the autogenerated AI Interpolator Status field to see when the process is done.
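The client-side polling loop can be sketched like this. Note that the status field's machine name and its "finished" value below are illustrative assumptions, not the module's documented API; check the actual field on your site:

```python
import time

def wait_until_generated(fetch, interval=5, timeout=300):
    """Poll until the entity's AI Interpolator Status field reports done.

    `fetch` is any callable that performs the GET request against the
    entity and returns its attributes as a dict. The field name
    "ai_interpolator_status" and the value "finished" are assumed
    placeholders for this sketch.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        attributes = fetch()
        if attributes.get("ai_interpolator_status") == "finished":
            return attributes
        time.sleep(interval)
    raise TimeoutError("AI Interpolator job did not finish in time")
```

In practice, `fetch` would wrap an HTTP GET against the JSON:API endpoint for the UUID returned by the initial POST response.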

Please only set this up when you understand the implications of it.
