API Objects

This is an overview of object formats and API basics. For a full list of endpoints, see our Swagger docs.


Licenses

Include the license as the request body in plain text (Content-Type: text/plain). The response will include the number of documents on the license. The new license will automatically be set to active.
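
As a minimal sketch (POST /api/v2/license is illustrative here; confirm the actual path in the Swagger docs), the request looks like:

POST /api/v2/license
Content-Type: text/plain

(contents of the license file)

and receives a response like: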

{"success":true, "results": 10000}

Connectors

Representation of Connectors and what auth types they support. The API will only return connectors that the currently active license supports.

{
  "FILESYSTEM": {
    "authType": [
      "none"
    ],
    "description": "Filesystem Connector",
    "name": "Filesystem Connector",
    "typeId": "filesystem"
  },
  "FTP": {
    "authType": [
      "basic"
    ],
    "description": "Connector to a FTP/S Server",
    "name": "FTP Connector",
    "typeId": "ftp"
  },
  "GOOGLE_DRIVE": {
    "authType": [
      "jwt"
    ],
    "description": "Google Drive Connector",
    "name": "Google Drive Connector",
    "typeId": "googleDrive"
  },
  "S3": {
    "authType": [
      "key"
    ],
    "description": "Connects to Amazon S3",
    "name": "Amazon S3 Connector",
    "typeId": "s3"
  }
}

| Key | Description |
| --- | --- |
| authType | An array of authentication options for the connector. Reflects the different types of Authentication Connectors this connector supports. |
| description | Connector description. |
| name | Connector name. |
| typeId | The key to identify this connector type when making API calls. |
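
To retrieve this object, call the connectors listing endpoint. The path below follows the /api/v2/connectors pattern used elsewhere in these docs and should be confirmed against the Swagger docs:

GET /api/v2/connectors

The typeId values it returns ("filesystem", "ftp", "googleDrive", "s3" above) are the keys passed as {typeId} in later calls.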

Connections

Connections are a combination of Integration and Authentication connections.

Returned Object

{
  "name": "S3",
  "description": "s3",
  "authenticationId": "5f2da102dbca9c0b610e28a1",
  "id": "5f3d37ba4448c83a13785336",
  "properties": {
    "usepreemptive": "false",
    "proxyport": "0",
    "proxyusername": "",
    "aws_secret": "**********",
    "proxyworkstation": "",
    "timeout": "100",
    "proxypassword": "**********",
    "proxyprotocol": "",
    "awsInfo": "",
    "endpoint": "",
    "proxydomain": "",
    "aws_access_key": "AKIAJEQLDNHEINWQHQRQ",
    "region": "us-east-1",
    "proxyhost": ""
  }
}

Fields

| Key | Description |
| --- | --- |
| name | Name of the connection. |
| description | Description of the connection. |
| authenticationId | In the UI there will be an authentication connection associated with this connection, named "{name} + Authentication". This is the ID of that connection. When updating tasks and post processors, use this id. |
| id | The id of the integration connection. Use this for jobs and job groups (coming soon). |
| properties | A JSON object containing all the distinct properties for the authentication connection. |

Creating Connections

The following JSON is a body to create an Alfresco connection.

{
  "name": "Alfresco Connection",
  "description": "Alfresco connection",
  "typeId": "alfresco",
  "authType": "basic",
  "properties": {
    "username": "user",
    "password": "password",
    "serverUrl": "http://localhost:8080"
  }
}

| Key | Description |
| --- | --- |
| name | The name of the connection. Must be alphanumeric and unique. |
| description | The description of the connection. Must be alphanumeric. |
| typeId | The type id of the connection to be created. If a type that is not under the current license is sent, the call will fail. This value cannot be changed once set. |
| authType | The type of authentication to use. If an authType that is not used by the connector is passed in, the call will fail. |
| properties | All fields are required to create the connection. If any key is missing from this object, the call will fail. To retrieve these fields, call /v2/connections/{typeId}/{authType}. |
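
As a second sketch, here is a create body for an S3 connection, assembled from the property keys visible in the returned object earlier in this section; the authoritative required key set comes from GET /v2/connections/s3/key, and the create endpoint shown matches the one used in the Full Example below:

POST /api/v2/connections

{
  "name": "S3 API Connection",
  "description": "S3 connection",
  "typeId": "s3",
  "authType": "key",
  "properties": {
    "aws_access_key": "(your access key)",
    "aws_secret": "(your secret key)",
    "region": "us-east-1",
    "endpoint": "",
    "timeout": "100",
    "awsInfo": "",
    "usepreemptive": "false",
    "proxyhost": "",
    "proxyport": "0",
    "proxyusername": "",
    "proxypassword": "",
    "proxydomain": "",
    "proxyworkstation": "",
    "proxyprotocol": ""
  }
}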

Updating Connections (PUT)

Use the same body as for creating a connection, but change the values. As stated previously, typeId cannot be changed. If changing the authType, the properties object must contain all fields from the new authentication type, or the call will fail. Including the id field in the body will also cause the call to fail. PUT is the only way to change authType; see the sketch below.
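
For example, a sketch of a PUT that changes only the server URL on the Alfresco connection above. The connection id is assumed to go in the URL path (written here as PUT /api/v2/connections/{id} for illustration), never in the body:

PUT /api/v2/connections/{id}

{
  "name": "Alfresco Connection",
  "description": "Alfresco connection",
  "typeId": "alfresco",
  "authType": "basic",
  "properties": {
    "username": "user",
    "password": "password",
    "serverUrl": "(your new server url)"
  }
}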

Updating Connection (PATCH)

Use a flat JSON body. This body can include name and description, plus any other authentication properties. Including authType, typeId, or id in the body will cause the call to fail. Here's an example of how to update the above connection using PATCH.

{
  "name": "newName",
  "username": "differentUser"
}

Jobs

Job returns currently contain more information than can be altered using the APIs. See the table below the example for the fields that can be altered. Items like the names of the connections involved and timestamp data are read-only.

Returned Object

{
  "auditAllMappings": false,
  "chainJobWaitInterval": "",
  "contentServiceName": "fswiki",
  "queueCapacity": 1000,
  "repositoryTimeZone": "America/New_York",
  "outputTimeZone": "America/New_York",
  "outputSpec": {
    "outputRenditions": false,
    "id_attribute": "source_repository_id",
    "es_vectorlist": "",
    "elBatchSize": 20,
    "includedUnMapped": true,
    "index_name": "hang2"
  },
  "runTaskGroupAfter": false,
  "outputId": "5f2d5f6d4448c83a1373a26c",
  "emailNotify": "NONE",
  "repoThreadCount": 1,
  "tagsList": [],
  "postProcessorThreadsCount": 1,
  "modified": 1719523581281,
  "startTime": 1267906320000,
  "modifiedBy": "admin",
  "id": 1684339491099,
  "jobType": "SIMPLE_MIGRATION",
  "jobMappingId": 0,
  "tasks": [
    {
      "taskInstanceName": "Default Tika Text Extraction",
      "beanName": "tikaExtractorTask",
      "taskName": "Tika Text Extraction",
      "position": 0,
      "id": "08c13d5b-cf4d-4b74-96a5-3edccc13c881",
      "type": "PROCESSOR",
      "fields": [
        {
          "value": "false",
          "key": "use_condition"
        },
        {
          "value": "",
          "key": "task_condition"
        },
        {
          "value": "false",
          "key": "task_stop_proc"
        },
        {
          "value": "content",
          "key": "tejt_field_to_mark"
        },
        {
          "value": "0",
          "key": "tejt_max_length"
        },
        {
          "value": "",
          "key": "tejt_etp"
        },
        {
          "value": "true",
          "key": "tejt_fail_on_error"
        },
        {
          "value": "true",
          "key": "tejt_rm_bin"
        }
      ]
    }
  ],
  "eventConfigurations": [],
  "chainJob": 0,
  "taskThreadsCount": 5,
  "repoId": "5f34383a4448c83a1375eee4",
  "outputName": "Elastic Output",
  "historyRetention": "ALWAYS",
  "postProcessors": [],
  "recordAuditScope": "FAILED_DELETED_SKIPPED_WRITTEN",
  "includeBinary": true,
  "created": 1678133553269,
  "emailTo": "",
  "auditEmptyMapping": false,
  "mappings": [
    {
      "sourceType": "TEXT",
      "watch": false,
      "mappingType": "FIELD_MAPPING",
      "targetType": "TEXT",
      "source": "content",
      "position": 0,
      "target": "content"
    }
  ],
  "taskGroupId": 0,
  "maxErrors": 0,
  "createdBy": "admin",
  "name": "El Wiki2 (Hang)",
  "outputThreadCount": 5,
  "readSpec": {
    "paths": [
      {
        "path": "/Users/ryan/Documents/testfiles/wikis2",
        "converttouri": false,
        "fileconfig_processfolders": false,
        "includehidden": false,
        "includeempty": false
      }
    ]
  },
  "endTime": 1709755920000,
  "batchSize": 0,
  "mappingGroupId": 0
}

Creating a Job

The following body will create a job. This example is for a SharePointREST to Bulk Filesystem job. Note that outputSpec is an empty object here. This means the output spec fields will be automatically generated using default values. See the table below for information on each field, formatting requirements, and which endpoints can alter the field.

{
  "auditAllMappings": false,
  "auditEmptyMapping": false,
  "batchSize": 0,
  "chainJob": 0,
  "chainJobWaitInterval": "",
  "readSpec": {
    "acls": false,
    "getPermissions": false,
    "siteName": "sites/3SixtyTestSite",
    "getVersions": false,
    "listName": "",
    "dateTimeParser": "AU",
    "getFolders": false
  },
  "emailNotify": "NONE",
  "emailTo": "",
  "endTime": 1736363460000,
  "eventConfigurations": [],
  "historyRetention": "ALWAYS",
  "includeBinary": true,
  "jobMappingId": 0,
  "jobType": "SIMPLE_MIGRATION",
  "mappingGroupId": 0,
  "mappings": [
    {
      "source": "source1",
      "target": "target1",
      "mappingType": "FIELD_MAPPING",
      "targetType": "TEXT",
      "watch": false
    },
    {
      "source": "source2",
      "target": "target2",
      "mappingType": "ASPECT_MAPPING",
      "targetType": "DOUBLE",
      "watch": false
    }
  ],
  "maxErrors": 0,
  "name": "API TEST JOB",
  "outputId": "outputId",
  "outputSpec": {},
  "outputThreadCount": 1,
  "outputTimeZone": "America/New_York",
  "postProcessorThreadsCount": 1,
  "postProcessors": [],
  "queueCapacity": 1000,
  "recordAuditScope": "FAILED_DELETED_SKIPPED_WRITTEN",
  "repoId": "repoId",
  "repoThreadCount": 1,
  "repositoryTimeZone": "America/New_York",
  "runTaskGroupAfter": false,
  "startTime": 1546974660000,
  "taskGroupId": 0,
  "taskThreadsCount": 1,
  "tasks": [
    {
      "taskName": "Default Override Folder Path",
      "typeName": "overrideFolderPathTask",
      "fields": {
        "jt_folderpath_pattern": "'/'"
      }
    }
  ]
}

| Key | Description | Type | Format / Notes |
| --- | --- | --- | --- |
| auditAllMappings | All mappings will be audited as if their "watch" value were set to true. | boolean | |
| auditEmptyMapping | Audited mappings will be skipped if they did not end up producing a value. This will audit the final value as an empty string. | boolean | |
| batchSize | Setting this value above 0 will automatically enable batching behavior; documents will be assigned a batchId. Check the documentation for whether the output connector supports batching. Required for Alfresco. | integer | max 1000 |
| chainJob | A job id of a job you want to run immediately after the completion of this job. | long | job id |
| chainJobWaitInterval | How long to wait to run the chained job. Format is [integer][time unit]: 1d is 1 day, 2m is 2 minutes, 30s is 30 seconds. h, m, s, and d are supported. If not included with a chain job, the interval will be 1h (1 hour). | String | |
| readSpec | The job specification for the reading connection. Sending a blank JSON object will generate a default specification for the connection. For now a field list can be gathered using GET /api/v2/connectors/{typeId}/specification/{function}, where function is "read" or "write". Can be updated by sending a complete specification to PUT /api/v2/jobs/{jobid}/specification/{function}, or a PATCH with JSON of keys and values. | JSONObject | |
| emailNotify | If the email service is set up, sends emails to the specified addresses. Values are NONE, ON_ERROR, ALWAYS. Default is NONE. | String | capitalized |
| emailTo | If emailNotify is not NONE, a comma-delimited list of emails to notify. | String | comma delimited |
| endTime | Most connectors only read documents within a specific time frame, configured on the job. This is the latest possible modified date and time to read in a document. Takes an epoch millisecond, e.g. 1546974660000. | long | epoch millisecond |
| eventConfigurations | Only works on Event jobs. A JSONArray of event configuration keys. Event configurations can be created in the UI or found at /api/integration/eventconfigurations. | JSONArray | Strings |
| historyRetention | How long to hold on to job audits for this job. The job history cleanup service runs daily and checks this value. Values are "ALWAYS", "WEEK", "MONTH", "QUARTER", "HALFYEAR", "YEAR". Default is ALWAYS. | String | capitalized |
| includeBinary | Whether or not to read document content. | boolean | |
| jobMappingId | The id of an external set of job mappings. | long | |
| jobType | Options are "SIMPLE_MIGRATION", "INCREMENTAL_MIGRATION", "EVENT" (MIP will be added later). | String | capitalized |
| mappingGroupId | The id of an external mapping group. | long | |
| mappings | A JSONArray of valid mapping JSON objects. See the Mappings section for the format. | JSONArray | JSONObject |
| maxErrors | How many errors before the job fails. Default is 0. | integer | max 10000 |
| name | The name of the job. | String | |
| outputId | The guid of the output connection. Can only be changed as part of a PUT request. | String | guid |
| outputSpec | The specification for the output connection. Sending a blank JSON object will generate a default specification for the connection. For now a field list can be gathered using GET /api/v2/connectors/{typeId}/specification/{function}, where function is "read" or "write". Can be updated by sending a complete specification to PUT /api/v2/jobs/{jobid}/specification/{function}, or a PATCH with JSON of keys and values. | JSONObject | |
| outputThreadCount | How many worker threads to use for writing. | integer | |
| outputTimeZone | A valid time zone string. See https://docs.oracle.com/cd/E72987_01/wcs/tag-ref/MISC/TimeZones.html. | String | |
| postProcessorThreadsCount | How many worker threads to use for post processing. | integer | |
| postProcessors | A JSONArray of valid task JSON objects. See the Tasks section for format and info. | JSONArray | JSONObject |
| queueCapacity | How many documents can be held in the queue before reading pauses. | integer | min 100, max 5000 |
| recordAuditScope | What types of documents to audit during a run. Values are "FAILED_ONLY", "FAILED_AND_DELETED", "FAILED_SKIPPED", "FAILED_DELETED_AND_WRITTEN", "FAILED_DELETED_SKIPPED_WRITTEN", "ALL". Default is FAILED_DELETED_SKIPPED_WRITTEN. | String | capitalized |
| repoId | The guid of the read connection. Can only be changed as part of a PUT request. | String | guid |
| repoThreadCount | How many worker threads to use for reading. | integer | |
| repositoryTimeZone | A valid time zone string. See https://docs.oracle.com/cd/E72987_01/wcs/tag-ref/MISC/TimeZones.html. | String | |
| runTaskGroupAfter | If including a task group, whether to run the group before or after the job tasks. | boolean | |
| startTime | Most connectors only read documents within a specific time frame, configured on the job. This is the earliest possible modified date and time to read in a document. Takes an epoch millisecond, e.g. 1546974660000. | long | epoch millisecond |
| taskGroupId | The id of a task group to use with the job. Currently available from GET /api/taskgrouprunner/list. | long | |
| taskThreadsCount | How many worker threads to use for tasks. | integer | |
| tasks | A JSONArray of valid task JSON objects. See the Tasks section for format and info. | JSONArray | JSONObject |

Special Read Specification: Filesystem

The filesystem, ftp, and otcs connectors have a unique structure for their read spec. The field is called paths and takes a JSONArray of path objects.

{
  "readSpec": {
    "paths": [
      {
        "path": "/Users/user/Downloads",
        "converttouri": false,
        "fileconfig_processfolders": false,
        "includehidden": false,
        "includeempty": false
      },
      {
        "path": "/Users/user/Documents",
        "converttouri": false,
        "fileconfig_processfolders": false,
        "includehidden": false,
        "includeempty": false
      }
    ]
  }
}

| Key | Description |
| --- | --- |
| path | The file path to read. |
| converttouri | Replace all backslashes with forward slashes. |
| fileconfig_processfolders | Process folders as if they were documents. |
| includehidden | Process hidden files. |
| includeempty | Process empty folders. |

If updating a job specification using PATCH, the whole paths array must be replaced, as shown below.
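
For example, a sketch of a PATCH that swaps in a single new path; the complete array is sent even though only one entry changes:

PATCH /api/v2/jobs/{jobId}/specification/read

{
  "paths": [
    {
      "path": "/Users/user/Documents",
      "converttouri": false,
      "fileconfig_processfolders": false,
      "includehidden": false,
      "includeempty": false
    }
  ]
}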

Updating Whole Jobs (PUT)

There are four options when using PUT.

| Endpoint | Usage and Requirements |
| --- | --- |
| /api/v2/jobs/{{jobId}} | Updates the whole job, including tasks, mappings, and specifications. |
| /api/v2/jobs/{{jobId}}/mappings | Replaces the mappings on a job using a JSONArray of mapping objects (see the example as well as the Mappings section). |
| /api/v2/jobs/{{jobId}}/tasks | Replaces the tasks on a job using a JSONArray of task objects (see the example as well as the Tasks section). |
| /api/v2/jobs/{{jobId}}/postprocessors | Replaces the postProcessors on a job using a JSONArray of task objects (see the example as well as the Tasks section). |

Partially Updating a Job (PATCH)

There are four options when using PATCH. All of these endpoints take a flat JSON body with the keys to change and their new values.

| Endpoint | Usage and Requirements |
| --- | --- |
| /api/v2/jobs/{{jobId}}/config | Updates basic job details. See the job fields table for what can and cannot be altered by this endpoint. |
| /api/v2/jobs/{{jobId}}/specification/{function} | Updates the read or write specification. Takes a flat JSON body of keys and values for the connector. |
| /api/v2/jobs/{{jobId}}/tasks/{taskId} | Updates a specific task using the taskId found through GET /jobs/{id} or GET /jobs/{id}/tasks. |
| /api/v2/jobs/{{jobId}}/postprocessors/{taskId} | Updates a specific post processor using the taskId found through GET /jobs/{id} or GET /jobs/{id}/postprocessors. |

Tasks

Tasks are currently only associated with jobs. Much like authentication connections, tasks have required fields that depend on the type of task. Here is the returned body for the text extraction task above, using GET /api/v2/tasks/tikaExtractorTask:

{
  "name": "Tika Text Extraction",
  "typeId": "tikaExtractorTask",
  "fields": [
    {
      "name": "Check a condition before executing this task.",
      "description": "",
      "id": "use_condition",
      "type": "CHECKBOX"
    },
    {
      "dependsOn": "use_condition",
      "name": "Condition",
      "description": "It will execute the task when the condition's result is 'true', 't', 'on', '1', or 'yes' (case-insensitive), 
      or run on all conditions if left empty.\n This condition is evaluated for each document, 
      determining whether the task should be executed based on the specified values.",
      "id": "task_condition",
      "type": "TEXT"
    },
    {
      "dependsOn": "use_condition",
      "name": "Stop Processing",
      "description": "If the condition results in 'true', 't', 'on', '1', or 'yes' (case-insensitive), 
      no additional tasks will be executed for the current document being processed.",
      "id": "task_stop_proc",
      "type": "CHECKBOX"
    },
    {
      "name": "Tika Content Field",
      "description": "",
      "id": "tejt_field_to_mark",
      "type": "TEXT"
    },
    {
      "name": "Max Content length (B)",
      "description": "Do not process documents over this size. 0 to process all documents.",
      "maximum": 0,
      "id": "tejt_max_length",
      "type": "LONG",
      "minimum": 0
    },
    {
      "name": "File Extensions to Extract",
      "description": "Comma delimited list of file extensions to process or leave blank to process all.",
      "id": "tejt_etp",
      "type": "TEXT"
    },
    {
      "name": "Fail Document on Extraction Error",
      "description": "Fail the Document if there is an Extraction Error during processing",
      "id": "tejt_fail_on_error",
      "type": "CHECKBOX"
    },
    {
      "name": "Remove content after extraction",
      "description": "Removes the content after the Tika Extraction",
      "id": "tejt_rm_bin",
      "type": "CHECKBOX"
    }
  ]
}
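
When adding a task to a job, each field id in the definition above becomes a key in the task's flat fields object. A hedged sketch of a tasks entry built from the definition, using the same values as the returned job earlier in this document (typeName follows the job-creation examples in this document; the required-fields table further below uses taskTypeId, so confirm the expected key against the Swagger docs). The three optional condition fields are omitted here; see Optional Fields below.

{
  "taskName": "Default Tika Text Extraction",
  "typeName": "tikaExtractorTask",
  "fields": {
    "tejt_field_to_mark": "content",
    "tejt_max_length": "0",
    "tejt_etp": "",
    "tejt_fail_on_error": "true",
    "tejt_rm_bin": "true"
  }
}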

Optional Fields

It should be noted that the first three fields (use_condition, task_condition, and task_stop_proc) are optional. They appear on all tasks. If they are not included in a task body, they will be set to their default values (false, blank, and false respectively). This is why the folder path override task in the job body above did not contain those fields.

| Key | Description | Type |
| --- | --- | --- |
| use_condition | Whether to check a condition before running the task. See our docs for more details. | boolean |
| task_condition | The expression used to determine whether to run the task. | String |
| task_stop_proc | Prevents other tasks from being executed if the condition is met. | boolean |

Here's the return value of /api/v2/tasks/overrideFolderPathTask without the extra fields:
{
  "name": "Override Folder Path",
  "typeId": "overrideFolderPathTask",
  "fields": [
    {
      "name": "Pattern",
      "description": "",
      "id": "jt_folderpath_pattern",
      "type": "TEXT"
    }
  ]
}

And here's the value passed to the job:

{
  "taskName": "Default Override Folder Path",
  "taskTypeId": "overrideFolderPathTask",
  "fields": {
    "jt_folderpath_pattern": "'/'"
  }
}

Required Fields

| Key | Description |
| --- | --- |
| taskName | What you want to name the task. |
| taskTypeId | The id for the type of task. Use the tasks APIs to see what tasks are available. |
| fields | A JSON object of keys and values for the required fields. |
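
A sketch combining the required fields with the optional condition fields from above. The condition expression is illustrative only (rd.mimetype here is a hypothetical document field); see our docs for the expression syntax:

{
  "taskName": "Conditional Override Folder Path",
  "taskTypeId": "overrideFolderPathTask",
  "fields": {
    "use_condition": "true",
    "task_condition": "'#{rd.mimetype}' == 'text/plain'",
    "task_stop_proc": "false",
    "jt_folderpath_pattern": "'/'"
  }
}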

Post Processors

Post processors are tasks that run after writing is complete. They function almost identically to tasks, but many of them take authenticationIds in order to perform their operations. To find out which post processors a connector supports, use /api/v2/connectors/{{typeId}}/postprocessors/list. Not many connectors currently support them. Post processors can have the following fields.

| Key | Description |
| --- | --- |
| authConnField | Takes the authenticationId of a Connection. The method will return valid connection ids as part of its options. |
| runForEachField | Whether the post processor will run for each document or just once, before post processing begins. Values are boolean strings "true" or "false". |
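
The exact field ids vary by post processor and can be retrieved through the tasks APIs, so the body below is only a shape sketch: authConnField and runForEachField stand in for the ids a specific post processor defines, and the authenticationId value is taken from the connection example earlier in this document.

{
  "taskName": "Example Post Processor",
  "taskTypeId": "(post processor typeId from the list call)",
  "fields": {
    "authConnField": "5f2da102dbca9c0b610e28a1",
    "runForEachField": "true"
  }
}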

Mappings

Mappings are currently only associated with jobs. Here is the example mapping from the job above:

[
  {
    "source": "source1",
    "target": "target1",
    "mappingType": "FIELD_MAPPING",
    "targetType": "TEXT",
    "watch": false
  },
  {
    "source": "source2",
    "target": "target2",
    "mappingType": "ASPECT_MAPPING",
    "targetType": "DOUBLE",
    "watch": false
  }
]

All of these fields are required to make a mapping.

| Key | Description |
| --- | --- |
| source | The source field on the document. |
| target | The field to write the value to. |
| mappingType | The type of mapping. Options are FIELD_MAPPING, ASPECT_MAPPING, TYPE_MAPPING, and CALCULATED_FIELD. See our docs for more info on what the values of these fields should be. |
| targetType | What type of data to convert the source value into when writing. |
| watch | Whether or not to audit the output of this mapping. |

Full Example

In this example we will follow the steps and calls, in order, to create a job that reads from a filesystem and writes to SharePoint. This example will not include the various GET calls required to gather the needed information; see the Swagger docs for a list of these methods.

Create Connections

Creating a filesystem connection

Use POST /api/v2/connections

{
  "name": "fstest",
  "description": "fs test connection",
  "typeId": "filesystem",
  "authType": "none",
  "properties": {}
}

and receive the following. The results value is the new id of the connection.

{
  "success": true,
  "results": "67587c548c91c03d754e3feb"
}

Creating a SharePoint connection

Use POST /api/v2/connections

{
  "name": "SPREST API",
  "description": "SPREST connection",
  "typeId": "sharePointREST",
  "authType": "basic",
  "properties": {
    "username": "user",
    "password": "password",
    "supac_s": "(your sharepoint url)"
  }
}

and receive the following

{
  "success": true,
  "results": "67587cd48c91c03d754e3fee"
}

Create a Job

Using these connections

Use POST /api/v2/jobs

{
  "auditAllMappings": false,
  "auditEmptyMapping": false,
  "batchSize": 0,
  "chainJob": 0,
  "chainJobWaitInterval": "",
  "readSpec": {
    "paths": [
      {
        "path": "/Users/user/Downloads",
        "converttouri": false,
        "fileconfig_processfolders": false,
        "includehidden": false,
        "includeempty": false
      }
    ]
  },
  "emailNotify": "NONE",
  "emailTo": "",
  "endTime": 1736363460000,
  "eventConfigurations": [],
  "historyRetention": "ALWAYS",
  "includeBinary": true,
  "jobMappingId": 0,
  "jobType": "SIMPLE_MIGRATION",
  "mappingGroupId": 0,
  "mappings": [],
  "maxErrors": 0,
  "name": "API TEST JOB",
  "outputId": "67587cd48c91c03d754e3fee",
  "outputSpec": {},
  "outputThreadCount": 1,
  "outputTimeZone": "America/New_York",
  "postProcessorThreadsCount": 1,
  "postProcessors": [],
  "queueCapacity": 1000,
  "recordAuditScope": "FAILED_DELETED_SKIPPED_WRITTEN",
  "repoId": "67587c548c91c03d754e3feb",
  "repoThreadCount": 1,
  "repositoryTimeZone": "America/New_York",
  "runTaskGroupAfter": false,
  "startTime": 1546974660000,
  "taskGroupId": 0,
  "taskThreadsCount": 1,
  "tasks": []

And receive the following. Results for creations and updates will be the id of the item that was created or updated.

{
  "success": true,
  "results": 1715353292430
}

Changing basic job config

Say we want to increase the number of writer threads and change the job name. For that we would use

PATCH /api/v2/jobs/1715353292430/config

{
  "outputThreadCount": 10,
  "name": "API TEST JOB 2"
}

and receive

{
  "success": true,
  "results": 1715353292430
}

Updating a job specification

Earlier we strategically left the outputSpec blank, as we are fine with most default values for the SharePoint connector. Here we will only set the site name and output folder, using the following request

PATCH /api/v2/jobs/1715353292430/specification/write

{
  "siteNameOut": "sites/Dev",
  "outputfolderpath": "/outputfolder"
}

and receive

{
  "success": true,
  "results": 1715353292430
}

Adding a task

3Sixty will always build the entire folder path for a document inside the target folder. If we want those file paths truncated for easier browsing, we can add an override task with the following

PUT /api/v2/jobs/1715353292430/tasks

[
  {
    "taskName": "Override Folder Path",
    "typeName": "overrideFolderPathTask",
    "fields": {
      "jt_folderpath_pattern": "'/'"
    }
  }
]

and receive

{
  "success": true,
  "results": 1715353292430
}

Changing a task

For this we'll say we made a mistake, and we want the overridden path to be "/migration/". First, we would need to call

GET /api/v2/jobs/1715353292430/tasks

and retrieve the id value of the task in question. In this case we'll use the example id 10e10112-7b5f-4087-a613-3492b119e95f.

Then, we would call

PATCH /api/v2/jobs/1715353292430/tasks/10e10112-7b5f-4087-a613-3492b119e95f

{
  "jt_folderpath_pattern": "/migration/"
}

Making the GET call again should show an updated task.

Adding mappings

Now, finally, we realize we want to add the original created date from the filesystem to a field called MetaDate. In SharePoint, this field only appears on the content type MetaDocument, so we'll need both a type mapping and a calculated field mapping.

PUT /api/v2/jobs/1715353292430/mappings

[
  {
    "source": "Document",//Our default document type
    "target": "MetaDocument",//The content type in sharepoint
    "mappingType": "TYPE_MAPPING",
    "targetType": "STRING",//Type mappings don't care about this field, but its still required
    "watch": false
  },
  {
    "source": "'#{rd.createddate}'",
    "target": "MetaDate",
    "mappingType": "CALCULATED_FIELD",
    "targetType": "DATETIME",
    "watch": false
  }
]

Making the GET call again should show the updated mappings.