JDBC
Java Database Connectivity (JDBC) is an application programming interface for the Java programming language that defines how a client may access a database. It is part of the Java Standard Edition platform from Oracle Corporation.
Java Tutorial: JDBC Basics
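For reference, the sketch below shows what a bare-bones JDBC client looks like in plain Java. The URL, credentials, table, and column names are placeholders, and the appropriate driver jar must be on the classpath; in 3Sixty these values are supplied through the connections and job configuration described on the rest of this page rather than written in code.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class JdbcExample {
    public static void main(String[] args) throws Exception {
        // Placeholder URL, credentials, table, and column -- substitute your own values.
        String url = "jdbc:postgresql://localhost:5432/mydb";
        try (Connection conn = DriverManager.getConnection(url, "myuser", "mypassword");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT * FROM MYTABLE")) {
            while (rs.next()) {
                // Each row is read by column name from the ResultSet.
                System.out.println(rs.getString("OBJECTID"));
            }
        }
    }
}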
Click the following link to see an example of this setup process using JDBC as the repository connector and BFS as the output connector: JDBC to BFS Oracle Integration
Authentication Connection
This connector works with any database that accepts JDBC connections. It requires knowing which JDBC driver you will be using, as well as the specific connection parameters for your JDBC instance; these can vary greatly by system. Click here for more information on setting up an Authentication Connection.
JDBC Authentication Connection Configuration
- Name: A unique name for this auth connector.
- Username: The username to authenticate with against the JDBC database instance.
- Password: The password associated with the above username.
- JDBC URL: The full JDBC URL to connect to. Required. (Example URL formats are shown after the driver table below.)
- JDBC Driver: The JDBC driver class to use, based on the database you are connecting to.
- Use the table below to find and retrieve the jar file for your database, and place it in the tomcat/lib folder.
- Note: JDBC is a widely used protocol and has not been tested with every system on this list.

DBMS | Driver class | Library name |
---|---|---|
PostgreSQL | org.postgresql.Driver | postgresql-42.2.8.jar (exact name depends on the driver version) https://jdbc.postgresql.org/download.html |
Firebird SQL | org.firebirdsql.jdbc.FBDriver | firebirdsql-full.jar http://www.firebirdsql.org/ |
H2 Database Engine | org.h2.Driver | h2.jar http://www.h2database.com |
HSQLDB | org.hsqldb.jdbcDriver | hsqldb.jar http://hsqldb.sourceforge.net |
Apache Derby | org.apache.derby.jdbc.EmbeddedDriver | derby.jar http://db.apache.org/derby/ |
IBM DB2 | com.ibm.db2.jcc.DB2Driver | db2jcc4.jar https://www.ibm.com/support/pages/db2-jdbc-driver-versions-and-downloads |
IBM DB2 for iSeries | com.ibm.as400.access.AS400JDBCDriver | jt400.jar http://jt400.sourceforge.net/ |
Teradata | com.teradata.jdbc.TeraDriver | terajdbc4.jar http://www.teradata.com/DownloadCenter/Forum158-1.aspx |
SQL Server (Microsoft driver) | com.microsoft.sqlserver.jdbc.SQLServerDriver | mssql-jdbc-6.2.2.jre8.jar (exact name depends on the driver version) https://github.com/Microsoft/mssql-jdbc |
Oracle | oracle.jdbc.OracleDriver | ojdbc8.jar http://www.oracle.com/technetwork/database/features/jdbc/index-091264.html |
MariaDB | org.mariadb.jdbc.Driver | mariadb-java-client-1.3.6.jar (exact name depends on the driver version) https://downloads.mariadb.org/connector-java/ |
MySQL | com.mysql.jdbc.Driver | mysql-connector-java-5.1.36-bin.jar (exact name depends on the driver version) http://www.mysql.com/downloads/connector/j/ |
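The JDBC URL format is driver-specific; consult your driver's documentation for the exact syntax. As a rough guide, with host names, ports, and database/SID names below being placeholders, URLs for some common databases look like this:

jdbc:postgresql://localhost:5432/mydb
jdbc:mysql://localhost:3306/mydb
jdbc:mariadb://localhost:3306/mydb
jdbc:sqlserver://localhost:1433;databaseName=mydb
jdbc:oracle:thin:@localhost:1521:ORCL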
Discovery Connector
JDBC Discovery Connector Configurations
- Name: A name for the Discovery Connection to identify it in the UI.
- Authentication Connection: The JDBC Authentication Connection to use.
- Ignore Types: Comma-delimited list of types to ignore. Regular expressions are supported, so to ignore all types with "workflow" in the name you would enter (.*)workflow(.*) into the Ignore Types textbox.
- Table Types: A comma-delimited list of table types to retrieve. Typical values are TABLE, VIEW, SYSTEM TABLE, GLOBAL TEMPORARY, LOCAL TEMPORARY, ALIAS, SYNONYM. The default, and most commonly used, value is TABLE.
- Driver: The JDBC driver class (e.g. com.mysql.jdbc.Driver).
- Catalogue: Enter text to narrow results, or leave blank to not use the catalogue in discovery.
- Schema Pattern: Must match the schema name as it is stored in the database. Enter a single space to discover tables without a schema. Leave blank if you do not want to use the schema name to narrow the search.
- Table Name Pattern: Enter % to search for all tables, or enter text to match against. (See the example settings after this list.)
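For example, a discovery configuration that pulls back only the ordinary tables in a single MySQL schema (the schema name sales here is purely illustrative) might look like:

Driver: com.mysql.jdbc.Driver
Catalogue: (blank)
Schema Pattern: sales
Table Name Pattern: %
Table Types: TABLE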
JDBC Integration Connection
Your JDBC Integration connection allows 3Sixty to input and output data to and from your JDBC repository.
The Output Connection pushes records, metadata, and permissions into the target repository, and also handles deleting content in sync mode. These connectors output files, folders, metadata, versions, and renditions to the output system associated with the connector. Every Output Connector can define the custom fields it needs in order to connect to and write to the repository; these fields are defined during development of the individual connector.
The Repository Connector allows organisations to read from repositories using Java Database Connectivity (JDBC). This means that, using 3Sixty and your JDBC driver of choice, you can connect to many repositories to retrieve data and content.
Click here for more information on setting up an integration connection.
JDBC Integration Connection Configuration
- Connection Name: This is a unique name given to the connector instance upon creation.
- Description: A description of the connector to help identify it.
- Authentication Connection: Your JDBC Authentication Connection.
JDBC Job Configuration
Currently, there are two processors included out of the box:
- Default JDBC Processor
- Failed ID Processor
JDBC Processor Config
The JDBC Connector uses different processors which process results in different ways. Some fields will only function with certain processors.
- SQL Query: The SQL query to run.
- Failed ID Query: Only used by the Failed ID Processor (see below).
- ID Field: The field to use as the source repository ID.
- Fetch Size: The number of rows to fetch from the data source at one time. The default is 10. (See the sketch after this list.)
- Input Directory: For processors where metadata is in the database but the binary content files are on the filesystem, this is the root directory of those files. Leave blank if not needed.
- Output Directory: Some processors need a place to safely store files while doing conversions and processing; this is that location. Leave blank if not needed.
- Internal Queue Size: The number of records to queue before pausing reads from the data source. Once the queue falls below this size, reading resumes.
- Include Content: Check to include content, or un-check to exclude content and retrieve only metadata.
- Processor Class: A registered Spring bean used to process the ResultSet.
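Fetch Size maps onto the standard JDBC fetch-size hint. As an illustration only (this is plain JDBC, not the connector's own code, and it assumes an open Connection named conn):

// Hint the driver to stream about 10 rows per round trip
// rather than buffering the entire result set in memory.
Statement stmt = conn.createStatement();
stmt.setFetchSize(10);
ResultSet rs = stmt.executeQuery("SELECT * FROM MYTABLE");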
Default JDBC Processor
The default processor runs a query and converts the table rows into processable documents. It does not use the start and end times configured on the Details tab; to filter on date fields, you will need to add your own WHERE clause.
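For example, to restrict a run to recently modified rows you might add a date condition yourself (LAST_MODIFIED is a hypothetical column name, and the exact date-literal syntax depends on your database):

SELECT * FROM MYTABLE WHERE LAST_MODIFIED >= '2024-01-01'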
Failed ID Processor
The Failed ID Processor is used when re-running errors. It runs once for each failed document ID and exposes the following parameter so the ID can be inserted into your failed IDs query:
${DOC_ID}
An example of a failed ID query would be:
SELECT * FROM MYTABLE WHERE OBJECTID = '${DOC_ID}'
Output Specification
- JDBC Output Table Name: The name of the table to output items to. Required if not using Auto Create. (If not set and Auto Create is enabled, the job name is used instead.) See the example configuration after the Auto Create note below.
- JDBC Output Key: The unique key to find/set/update in the table. Required if not using Auto Create.
- Catalogue: The catalogue (or domain) of the JDBC instance to connect to. Required if not using Auto Create.
- Schema: The schema (or database/namespace) of the JDBC instance to connect to. Required for output.
- Binary Column Name: The name of the column that document binaries will be written to if "Include Binaries" is checked in the job configuration.
- Auto Create: See below.
- Quote Identifier: If there are spaces in your table names, catalogue names, schema names, or column names, enter a quote identifier. For example, SQL Server and Oracle use a double quote ("), while MySQL uses a backtick (`).
Important: AUTO CREATE
Use extreme caution with this field. The default is false. If checked, only the Table Name, Schema Name, and Binary Column Name fields from this tab are used.
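For example, a minimal output configuration writing into an existing table (all names below are hypothetical) might be:

JDBC Output Table Name: documents
JDBC Output Key: doc_id
Schema: myschema
Binary Column Name: content
Auto Create: unchecked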
Content Service Connection
This section covers the JDBC specific configuration of the Content Service Connector. For a description of how to set up a content services connector generically see Content Service Connectors.
Configuration Fields
The JDBC-specific fields are listed below; an example configuration follows the list.
- JDBC Table Name: Database table name.
- Schema: Database schema name.
- Catalogue: Database catalogue name.
- ID Field: The primary key column to use as the unique ID (expected to be an auto-generated, i.e. auto-increment, field).
- Name Property: The column to use as the "document name". This field should not be left blank.
- Created Date Property: The column to use as the "document created date". This field should not be left blank.
- Modified Date Property: The column to use as the "document modified date". This field should not be left blank.
- Binary Property: The column to use as the file content. This field can be left blank.
- Root Version Id Property: If the entities in the table track versions, this should provide the column that contains the root version ID. This field can be left blank.
- Is Latest Version Property: Only used if the Root Version Id Property is populated, and can be left blank only if that field is also blank. If the entities in the table track versions, this should provide the column that indicates whether the version is the latest (as a boolean).
- Major Version Label Property: Only used if the Root Version Id Property is populated. This field can be left blank. If the entities in the table track versions, this should provide the column that contains the major version portion of a version label.
- Minor Version Label Property: Only used if the Root Version Id Property is populated. This field can be left blank. If the entities in the table track versions, this should provide the column that contains the minor version portion of a version label.
- Quote Identifier Property: If there are spaces in your table names, catalogue names, schema names, or column names, enter a quote identifier. For example, SQL Server and Oracle use a double quote ("), while MySQL uses a backtick (`).
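As an illustration, for a hypothetical versioned table the content service fields might be filled in as follows (all column names are made up for the example):

JDBC Table Name: documents
ID Field: doc_id
Name Property: doc_name
Created Date Property: created_at
Modified Date Property: modified_at
Binary Property: content
Root Version Id Property: root_version_id
Is Latest Version Property: is_latest
Major Version Label Property: major_version
Minor Version Label Property: minor_version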
Supported Methods
- createFile
- deleteObjectByID
- findTypeDefinition
- getFileContent
- getObjectProperties
- updateFile
- updateProperties
Advanced Topic: Connecting to a Microsoft Access Database
To use an Access database you will need a JDBC driver; you can find one here.
You will need to install the jars in the 3sixty-admin web app. This post tells you which jars you will need.
Place them into 3sixty-admin/WEB-INF/lib.
After installing the jars you should be able to start 3Sixty and use the JDBC connector as usual.

This process will walk you through setting up an integration from a JDBC Repository to a BFS Output system.
1. Create a JDBC Auth Connector:
For Oracle, there are a couple of things to note:
- The username must be all caps.
- The SID/Schema must be all caps as well.
- The username can be different from the schema name, but it needs permissions on that schema.
- Note the driver class (see the example settings below).
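For instance, an Oracle authentication connection might use the following values (the host, port, and SID are placeholders):

Username: MYSCHEMA
JDBC URL: jdbc:oracle:thin:@dbhost:1521:ORCL
JDBC Driver: oracle.jdbc.OracleDriver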
2. Create a JDBC Discovery Connector:
Set the authentication connection to the auth connector created above. For Oracle, the default settings will retrieve all objects in your database.
Troubleshooting:
Remove any ojdbc jar file from the 3Sixty-admin/WEB-INF/lib directory and put in the ojdbc jar for your Oracle version. For Oracle 12c, for example, that would be ojdbc8.jar.
The schema pattern should be set to the schema you want; Oracle seems to hang if you leave that field blank.
Table types can be a comma delimited list of the following:
- TABLE
- VIEW
- SYSTEM TABLE
- GLOBAL TEMPORARY
- LOCAL TEMPORARY
- ALIAS
- SYNONYM
3. Create an integration connector for JDBC:
4. Create an integration connector for BFS. NOTE: BFS does not have an authentication connector.
5. Create a job with the repo connector as your jdbc integration connector and the output connector as your BFS integration connector:
6. Edit the BFS Tab:
Choose your output folder path; it must exist ahead of time. Check the Include Un-Mapped Properties checkbox. Leave the rest at the defaults, as in the picture above.
7. Edit the JDBC Tab:
Some things to note about this configuration:
- You should test your query using SQL Developer. This lets you get the query correct before testing it in 3Sixty.
- Notice how ABC_PLAN_id is in quotes in the query? That is because, in this instance, the field will not work unless it is quoted; both SQL Developer and 3Sixty will throw an error otherwise.
- The ID Field must be unique and is used as the file name. See the image below for how this shows up in the BFS output.
- The query should list the fields you want; SELECT * is not intended to work.
- The query cannot end with a semicolon (;) or it will fail.
- Fetch Size can be modified to improve performance.
This is what the BFS Output looks like for this query. It returns one row, and the id was the number 1.
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
<comment>--No Comment---</comment>
<entry key="ABC_PLAN_id">1</entry>
<entry key="folderpath"></entry>
<entry key="PLAN_YEAR_NBR">12.45</entry>
<entry key="type">cm:content</entry>
<entry key="PLAN_YEAR">2004</entry>
<entry key="separator">,</entry>
</properties>
The above XML is an example of the properties that were output for this job.
You'll notice the fields come over as-is, with no prefix, so PLAN_YEAR won't have a type associated with it. When you see this, you can fix it by modifying your query as follows:
select "ABC_PLAN_id" as "mytype.abc_plan_id", plan_year as "mytype.plan_year", plan_year_nbr as "mytype.plan_year_nbr" from abc_plan
The new query results in an XML file that looks like the following:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
<comment>--No Comment---</comment>
<entry key="folderpath"></entry>
<entry key="mytype.plan_year_nbr">12.45</entry>
<entry key="mytype.abc_plan_id">1</entry>
<entry key="type">cm:content</entry>
<entry key="mytype.plan_year">2004</entry>
<entry key="separator">,</entry>
</properties>
But the file is named: default_document_name.doc.metadata.properties.xml
This is because we changed the ID field in the query, but not the ID Field in the form. If we now update it as follows:
You'll see the file name is back to: 1.metadata.properties.xml
Now that you have this working with BFS you can create your mappings to match what you want in the output system. You can map mytype in this case to the output type you want. You can then map the mytype fields to the fields of the new type.
8. Run the JDBC to BFS Job to finish the integration process
API Keys
JDBC Connector: Read=true, Write=true, MIP=false
Repo (Read) Specs
Key | Description | Data Type |
---|---|---|
processorclass | Processor Class | String |
query | SQL Query | String |
fquery | Failed ID Query | String |
idField | ID Field | String |
fetchsize | Fetch Size | String |
inputdirectory | Input Directory | String |
outputdirectory | Output Directory | String |
internalqueuesize | Internal Queue Size | String |
process_content | Include Content | Boolean |
minPoolSize | Minimum connection pool size | Long |
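These keys correspond to the fields on the JDBC tab of a job. As a rough sketch only (the exact request envelope depends on the 3Sixty API, and the query, field, and value choices below are placeholders), a read spec might carry values such as:

query: SELECT * FROM MYTABLE
idField: OBJECTID
fetchsize: 10
process_content: true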
Output (Write) Specs
Key | Description | Data Type |
---|---|---|
jdbc_table | JDBC Table Name | String |
jdbc_key | JDBC Output Key | String |
jdbc_catalog | Catalog | String |
jdbc_schema | Schema | String |
jdbc_binary_column_name | Binary Column Name | String |
jdbc_datetime_format | JDBC Date Time Format | String |
jdbc_auto_create | Auto Create | Boolean |
quoteIdentifier | Quote Identifier Property | String |
minPoolSize | Minimum connection pool size | Long |