
Stop Polling: Build File-Driven Workflows with Unmeshed

Replace cron jobs and polling scripts with a single file-watcher step in Unmeshed. See the workflow definition, the ENTRY_CREATE configuration, and how to react to new files the moment they appear.

When the trigger is a file, not an API

Not every system gives you an API. Sometimes the only signal you get is a file appearing on disk: a batch job drops a CSV at 02:00, an upstream service writes a log, an SFTP sync deposits a report into a shared directory. The hard part is not reading the file. The hard part is knowing the moment it lands.

Most teams solve this with a cron job or a polling loop that lists a directory every few seconds. It works, but it is slow and wasteful, and the logic ends up scattered across scripts that nobody owns.

This post walks through file_watcher_demo, an Unmeshed process that replaces all of that with a single watcher step. By the end you will see the full workflow definition, the exact filewatcher.agent configuration, and the pattern you can drop into any process that needs to react to a file event.

File watcher workflow in Unmeshed

What the workflow does

Imagine a service that writes a new log file whenever an overnight job starts. You do not want a custom script polling the filesystem just to detect that moment. You want the workflow itself to stay aware of the event.

file_watcher_demo does exactly that. It generates a unique log file name, clears stale test logs, starts watching /unmeshed/test/logs, writes a new log file after a short delay, and then confirms the watcher caught the creation event before cleaning up.

The shape of the process is simple:

prepare log file name
  -> clear old logs
  -> start watching /unmeshed/test/logs
  -> create a new .log file
  -> watcher detects ENTRY_CREATE
  -> clean up

Inside the filewatcher.agent step

The watcher itself is a single step. It listens for files matching *.log inside /unmeshed/test/logs and completes as soon as a matching ENTRY_CREATE event fires:

{
  "name": "watch_for_new_log",
  "type": "FILEWATCHER",
  "ref": "filewatcher.agent",
  "input": {
    "directory": "/unmeshed/test/logs",
    "pattern": "*.log",
    "watchCriteria": "ENTRY_CREATE"
  }
}

That is the entire trigger surface. No cron entry, no polling loop, no shell script. The step blocks until the filesystem reports a new matching file, then yields control back to the workflow.

Walking through the steps

1. Generate a unique file name. The first JavaScript step builds a name like 1712345678901.log. A unique name keeps the run deterministic and stops the watcher from accidentally matching a leftover file from a previous run.

2. Clear old logs. A short cleanup step removes any existing .log files in the target directory. This is good hygiene for a demo, and in production it is the same pattern you use to drain a "drop zone" before processing.

3. Start the watcher. The filewatcher.agent step shown above begins listening. From this point on, the workflow is paused on a real filesystem event, not a timer.

4. Write a file in a parallel branch. A second branch waits briefly, then writes the log file. The contents do not matter. What matters is the ENTRY_CREATE event the filesystem emits the moment the file appears in the directory.

5. Confirm the watcher fired. A DEPENDSON check verifies the watcher branch reached COMPLETED before the workflow continues. That makes the contract explicit: the process did not just create a file, it proved that Unmeshed observed the creation.

6. Clean up. The generated file is deleted so the directory is ready for the next run.

Why this pattern is worth using

Without a file watcher, the same problem turns into cron entries checking directories on a fixed cadence, scripts polling every few seconds, and reactions that are always a minute or two behind reality. The orchestration logic ends up living outside the orchestrator, which is the worst place for it.

With a watcher step inside the workflow, every one of those concerns moves into one place:

  • workflows react the instant a file lands, not at the next cron tick
  • there is no polling code to maintain or scale
  • file events show up in the same execution history as every other step
  • secrets, retries, and downstream branching stay inside the orchestration layer

This turns a whole class of problems into something you can handle declaratively: new log files written by an application, CSV exports dropped by a reporting job, inbound files copied from an SFTP sync, batch documents generated by another internal service. One step, many use cases.

Full process definition

Below is the workflow expressed as a process definition you can adapt directly.

{
  "orgId": 1,
  "namespace": "default",
  "name": "file_watcher_demo",
  "version": 1,
  "type": "API_ORCHESTRATION",
  "steps": [
    {
      "name": "prepare_log_file_name",
      "type": "JAVASCRIPT",
      "ref": "prepare_log_file_name"
    },
    {
      "name": "clear_old_logs",
      "type": "JAVASCRIPT",
      "ref": "clear_old_logs"
    },
    {
      "name": "watch_for_new_log",
      "type": "FILEWATCHER",
      "ref": "filewatcher.agent",
      "input": {
        "directory": "/unmeshed/test/logs",
        "pattern": "*.log",
        "watchCriteria": "ENTRY_CREATE"
      }
    },
    {
      "name": "create_new_log",
      "type": "JAVASCRIPT",
      "ref": "create_new_log"
    },
    {
      "name": "confirm_watcher_completed",
      "type": "DEPENDSON",
      "ref": "confirm_watcher_completed",
      "input": {
        "dependsOn": ["watch_for_new_log"],
        "status": "COMPLETED"
      }
    },
    {
      "name": "delete_test_log",
      "type": "JAVASCRIPT",
      "ref": "delete_test_log"
    }
  ]
}

Where to take this next

Once the basic watcher is in place, the same shape extends naturally: swap the directory for an SFTP mount, change the pattern to *.csv, or fan out to a downstream processor as soon as the file arrives.
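
As a sketch, a CSV variant is the same step with two fields changed. The field names come from the definition above; the step name and directory here are illustrative, not from the demo.

```json
{
  "name": "watch_for_new_csv",
  "type": "FILEWATCHER",
  "ref": "filewatcher.agent",
  "input": {
    "directory": "/sftp/inbound/reports",
    "pattern": "*.csv",
    "watchCriteria": "ENTRY_CREATE"
  }
}
```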

Final thoughts

File-based events are everywhere: logs, exports, batch jobs, inbound integrations. The difference between a fragile setup and a clean one is whether you treat those events as an afterthought wrapped in cron, or as a first-class trigger inside your orchestration layer.

With the filewatcher.agent step, file events become part of the workflow itself. No polling. No cron jobs. The process simply waits for the file and continues.

Replace your polling scripts with a watcher step

You just saw the entire file-driven workflow pattern in Unmeshed. Try it on your own log directories, SFTP drops, or CSV exports and let your workflows react the moment a file lands.

Tell us about the file-based pipelines you are wiring up and we will help you map them into a workflow.
