Export Connector
(Previously known as the BFS Connector)
The Export Connector is useful for outputting files and their metadata as separate entities. File binaries are written out alongside their metadata, which is stored in a companion metadata file (an XML or properties file, depending on configuration; see Output Specifications below). The connector can also read these files back in as a source for export to other systems.
Connector Compatibility

| Repo | Output | Content Service | Content Search | Manage In Place |
|---|---|---|---|---|
| Yes | Yes | No | No | No |
Integration Connection
Every job requires an integration connection for both the source repository connection and the output connection; these are also known as the input and output connections. Their job is to query or crawl remote systems for files, folders, metadata, versions, and renditions. In repo mode, the connection retrieves list items and all of their relevant metadata from a list or library on the specified site. In output mode, the connection writes content and assigns the mapped content type (from type mappings), or simply leaves the new list item as a Document. Click here for more information on setting up an integration connection.
- Select Integration and click on the New Connection button.
- Enter the name and description of your connection.
- Select the connection type from the drop-down list.
- Click Save on the Create Connection form.
- Click Save on the Edit Connection page.
There are no fields to configure in an integration connection.
Guide to Integration Connections
Job Configuration
A 3Sixty job is the process of moving or syncing content (including versions, ACLs, and metadata) from one CMS (content management system) to another. Add tasks to your job to have better control over how your data gets migrated. Click here for details on how to set up an integration job.
- Select List Jobs under Jobs on the navigation menu or the dashboard.
- Click the Create Job button.
- In the New Job form:
  - Name the job.
  - Select Simple Migration from the Job Type drop-down.
  - Select the Repository Connection.
  - Select the Output Connection.
  - Select your Content Service Connection (only required if you will be using Federation).
- Click Save to open the Edit Job page.
- Fill in the configurations for the Repo and Output Configuration tabs.
- Click Save to save your new integration job.
- Select Run and Monitor Jobs under Integration in the navigation menu.
- Click the play button next to the job you want to run.
- Click the refresh button to view your completed job status (larger jobs will take longer to run).
Repository Specification
Also known as an input connection, its job is to query or crawl remote systems for files, folders, metadata, versions, and renditions. When using this connector as a source repository, filling out the following configuration fields tells 3Sixty how to locate the files you want migrated.
| Field | Description |
|---|---|
| Source Directory | The directory to begin crawling for BFS files. |
| Do Not Convert Metadata | 3Sixty converts all type and field values to lowercase by default. If this is checked, all fields keep their original case. |
| Process Folders | Tells the job to process folders. If checked and the job is rerun for errors, folders will be processed again. |
| Process Files | Tells the job to process files. Checked by default. |
| Check for Multi-Valued Fields | Checks for commas in field values. If commas are present, the value is split and added to the metadata as a multi-valued field. |
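For illustration, here is how the multi-value check treats a comma-separated value. The field name is hypothetical, and the before/after is shown as JSON purely for readability (the actual metadata lives in the BFS metadata file):

```json
{
  "cm:keywords": "finance,q3,report"
}
```

With Check for Multi-Valued Fields enabled, the commas cause the value to be read as a multi-valued field:

```json
{
  "cm:keywords": ["finance", "q3", "report"]
}
```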
Output Specifications
When using this connector as a file destination, filling out the following fields tells 3Sixty where you want the files exported to.
Select one of the following export types: BFS or Redact.
BFS Export
| Field | Description |
|---|---|
| Output Folder Path | The output directory location where your BFS files will be stored. |
| Multi-Value Separator | Multi-value fields will be combined into a list using this separator. |
| Output Metadata as XML | Creates a metadata XML file. If not selected, metadata will be stored in a properties file. |
| Include Un-Mapped Properties | If selected, all available properties will be included in the metadata output file. If not selected, only mapped properties are included. |
| Inherit ACLs | If selected, inherited ACL properties will be included in the metadata output file. |
| Zip Output | If selected, output will be created as zip files. This option can only be used with batch migrations (i.e. batch size must be greater than 0). |
| Include Aspects With No Field Mappings | Check this box to include aspects that have no field mappings. |
| Aspect Remove Field Mapping | Takes a JSON string. Removes an aspect if the listed fields are not present. Example from the UI: {"myaspect:two":["field1","field2"],"myaspect:one":["field1","field2"]}. This means that if field1 or field2 is not present, the aspect is not added. See the pretty-printed example below the table. |
| Date Format | Date mappings will be converted to this ISO format. |
| Date Time Format | DateTime mappings will be converted to this ISO format. |
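For readability, here is the Aspect Remove Field Mapping example from the table above, pretty-printed (the aspect and field names come from the UI example and are purely illustrative):

```json
{
  "myaspect:two": ["field1", "field2"],
  "myaspect:one": ["field1", "field2"]
}
```

With this mapping, myaspect:one and myaspect:two are only added to a document when both field1 and field2 are present; if either field is missing, the aspect is not added.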
Redact Export
| Field | Description |
|---|---|
| Output Folder Path | The output directory location where your BFS files will be stored. |
| Redaction Field | The field that contains the redaction JSON. Should match the task supplying the redaction data. See the tutorial below for step-by-step instructions. |
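The redaction JSON stored in the Redaction Field uses the "terms" structure produced by the AI Content Enrichment task in the tutorial below. A minimal example (the values are illustrative):

```json
{
  "terms": [
    {
      "target_text": "John Peterson",
      "code": "PERSON",
      "code_description": "Name of any person",
      "case_sensitive": "false",
      "wholeword": "false"
    }
  ]
}
```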
Tutorial: Redact Output Job
This tutorial guides you through setting up a job to read from a source and use generative AI to extract sensitive terms for redaction.
Step 1: Set Up Your Repository Connection
- Refer to your source's documentation for instructions on how to set up the repository connection.
Step 2: Create an Export Connector
- Navigate to Connections > Integration.
- Click on +Create Integration Connection.
- Name your connection.
- Under Connection Type, search for and select Export Connector.
- Save the connection.
Step 3: Create a Job with Your Source and New Export Connection
- Refer to your source's documentation for job configuration details.
Step 4: Output Configuration
- Go to the export connection tab.
- Set your output folder.
- Change the export type to Redact Export.
- Choose the name of the field where the redaction terms will be stored. Use this field name as the Result Field for the AI Content Enrichment Task in Step 5.
Step 5: Add Tasks in the Following Order
Task 1: Tika Text Extract
- Use default values.
Task 2: AI Content Enrichment Task
- Set Result Field to the field name chosen in Step 4.
- Under Advanced Options:
  - Set tokens to 2000 for the example prompt.
  - If your responses are coming back as partial JSON, causing documents to fail, increase this value (see the truncation example after this list).
  - If you increase the number of terms to search for in your prompt, make sure to raise this value as well.
- Refer to the documentation for the AI Content Enrichment task for more information on the advanced options.
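As an illustration of the partial-JSON failure mode (this snippet is fabricated and deliberately truncated), a response cut off by a token limit that is too low ends mid-object and cannot be parsed:

```json
{
  "terms": [
    {
      "target_text": "John Peterson",
      "code": "PERS
```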
Note: Here are two example System Prompts. The first is more verbose (uses more tokens) but produces a high volume of results. The second requires fewer tokens but produces a much weaker result set. Remember that single quotes around the prompt are required.
User Prompt (the single quotes are required):
```
'
This is the content to search for sensitive information:
#{field.content}
'
```
Example 1
```
'
You are a redaction assistant that extracts all requested terms from the given text.
You must respond only in JSON format. The response JSON will be used by a redaction system, hence we call it the redaction JSON.
Each term item must have "target_text", "code", "code_description", "case_sensitive", and "wholeword". Use the following codes and their descriptions when creating terms.
These are the types of information you must find:
• ADDRESS: A full address
• PERSON: Any person name
• ACCOUNT: Banking or government institution account
• GROUP: The name of a group or organization
• EMAIL: Email Address
• LINK: Hyperlink with non-top-level extension
• DOB: Date of birth
• FINANCE: Bank accounts, salary details, etc.
• LOCATION: Proper names of locations
Each array item is a JSONObject. The array can be empty.
Here are some examples:
Example 1:
Input: John Peterson, born on 12/05/1982, lives at 45 Queen Street, Melbourne, VIC 3000, and was recently diagnosed with hypertension.
He maintains a Commonwealth Bank account number 77889933, earns a monthly salary of $7,200, and is a member of the Rotary Club of Melbourne.
For correspondence, you can reach him via john.peterson@samplemail.org, and he often uploads documents at http://archive.docs/internal. Last month, he traveled to Bondi Beach in Sydney with his colleagues.
Expected output:
{
  "terms": [
    {
      "target_text": "John Peterson",
      "code": "PERSON",
      "code_description": "Name of any person",
      "case_sensitive": "false",
      "wholeword": "false"
    },
    {
      "target_text": "12/05/1982",
      "code": "DOB",
      "code_description": "Date of birth",
      "case_sensitive": "false",
      "wholeword": "false"
    },
    {
      "target_text": "45 Queen Street, Melbourne, VIC 3000",
      "code": "ADDRESS",
      "code_description": "A full address",
      "case_sensitive": "false",
      "wholeword": "false"
    },
    {
      "target_text": "77889933",
      "code": "ACCOUNT",
      "code_description": "Banking or government institution account",
      "case_sensitive": "false",
      "wholeword": "false"
    },
    {
      "target_text": "$7,200",
      "code": "FINANCE",
      "code_description": "Bank accounts, salary details, etc.",
      "case_sensitive": "false",
      "wholeword": "false"
    },
    {
      "target_text": "Rotary Club of Melbourne",
      "code": "GROUP",
      "code_description": "The name of a group or organization",
      "case_sensitive": "false",
      "wholeword": "false"
    },
    {
      "target_text": "john.peterson@samplemail.org",
      "code": "EMAIL",
      "code_description": "Email Address",
      "case_sensitive": "false",
      "wholeword": "false"
    },
    {
      "target_text": "http://archive.docs/internal",
      "code": "LINK",
      "code_description": "Hyperlink with non-top-level extension",
      "case_sensitive": "false",
      "wholeword": "false"
    },
    {
      "target_text": "Bondi Beach",
      "code": "LOCATION",
      "code_description": "Proper names of locations",
      "case_sensitive": "false",
      "wholeword": "false"
    }
  ]
}
Using the instructions and examples above, create a redaction JSON for the following text.
The text is extracted from a document and can be long.
'
```
Example 2
```
'
You are a redaction assistant that extracts all requested terms from the given text.
You respond only in JSON format with a single JSON array named "terms".
Each array item is a JSONObject. The array can be empty.
This is an example JSON with comments on what each field should be:
Example:
{
  "target_text": "Paul Rudd",                  // The sensitive information you found
  "code": "PERSON",                            // The code that will be assigned to this type of information from this prompt
  "code_description": "The name of a person",  // A description for that code from this prompt
  "case_sensitive": "false",                   // Whether the analysis was case-sensitive
  "wholeword": "false"                         // Whether whole word analysis was used
}
Use the following codes and their descriptions when creating terms.
These are the types of information you must find:
- ADDRESS: A full address
- PERSON: Any person name
- ACCOUNT: Banking or government institution account
- GROUP: The name of a group or organization
- EMAIL: Email Address
- LINK: Hyperlink with non-top-level extension
- DOB: Date of birth
- FINANCE: Bank accounts, salary details, etc.
- HEALTH: Conditions that can be linked to a person
- LOCATION: Proper names of locations
'
```
Note: Make sure to include the single quote (') at the beginning and end of the system and user prompts.
Need help using the Export Connector? We can help.