11.2.x Release Notes
INFO
Within these release notes Datameer X provides information and resolutions on improvements, bug fixes, and visual changes for the 11.2.x maintenance release versions.
Visit Datameer's Help Center for additional information or contact your customer service representative.
Datameer X 11.2.14
Improvements
1 | Update 3rd party libraries known to have security vulnerabilities or performance issues | DAP-41554 |
---|---|---|
Upgraded jasypt to 1.9.3 and removed the unused hibernate-jasypt bridge code. |
2 | Remove Google Fusion Tables from `plugin-webservice` | DAP-42616 |
---|---|---|
Google Fusion Tables support has been removed from the code base, as the service is no longer available from Google. |
Bug Fixes
1 | Plug-in Tableau: failures with `ClassNotFoundException: com.fasterxml.jackson.jaxrs.base.ProviderBase` on Hadoop 3.3 based distributions | DAP-42615 |
---|---|---|
The Tableau plug-in can now resolve the `com.fasterxml.jackson.jaxrs` dependencies. | Bug fixed. |
2 | REST endpoint `/rest/data/import-job/` throws NPE for datalinks | DAP-42614 |
---|---|---|
The REST endpoint now resolves datalinks. | Bug fixed. |
Datameer X 11.2.13
Improvements
1 | Update 3rd party libraries known to have security vulnerabilities or performance issues | DAP-42598 |
---|---|---|
The following 3rd party libraries were updated to newer versions (or were removed):
|
2 | Certify RedHat 8 with Datameer X | DAP-42595 |
---|---|---|
Successfully verified that Datameer X can operate on RedHat 8 as the operating system. |
3 | Certify RedHat 9 with Datameer X | DAP-42599 |
---|---|---|
Successfully verified that Datameer X can operate on RedHat 9 as the operating system. |
4 | Add support for Amazon AWS EMR 7.0.0 | DAP-42601 |
---|---|---|
Datameer now supports Amazon AWS EMR version 7.0.0. |
5 | Remove Datameer's ADLS Gen1 connector | DAP-42596 |
---|---|---|
ADLS Gen1 will be discontinued by February 29, 2024; Datameer's connector has therefore been removed from the application. |
Bug Fixes
1 | Script "update_paths.sh" fails with HibernateException: "Unable to deserialize JSON due to PluginRegistry not being set" | DAP-42594 |
---|---|---|
The "update_paths.sh" script now updates the information in Datameer's metadata database as expected. | Bug fixed. |
Datameer X 11.2.12
Improvements
1 | Add Support for Cloudera Private Cloud / Cloudera Runtime 7.1.9.0 | DAP-42564 |
---|---|---|
Datameer now supports Cloudera Private Cloud / Cloudera Runtime 7.1.9.0. |
2 | Add support for Amazon AWS EMR 6.10.0 | DAP-42587 |
---|---|---|
Datameer now supports Amazon AWS EMR version 6.10.0. |
3 | Add support for Amazon AWS EMR 6.11.0 | DAP-42586 |
---|---|---|
Datameer now supports Amazon AWS EMR version 6.11.0. |
4 | Add support for Amazon AWS EMR 6.12.0 | DAP-42585 |
---|---|---|
Datameer now supports Amazon AWS EMR version 6.12.0. |
5 | Add support for Amazon AWS EMR 6.13.0 | DAP-42584 |
---|---|---|
Datameer now supports Amazon AWS EMR version 6.13.0. |
6 | Add support for Amazon AWS EMR 6.15.0 | DAP-42581 |
---|---|---|
Datameer now supports Amazon AWS EMR version 6.15.0. |
Bug Fixes
1 | Joins on BigDecimal keys return different results when Map side vs Reduce side join is used | DAP-42574 |
---|---|---|
BigDecimal values with different scales (e.g. 1.5 vs. 1.50) are now consistently considered equal for JOIN and GROUPBY operations, so Map-side and Reduce-side joins return the same results. | Bug fixed. |
Datameer X 11.2.11
Improvements
1 | Supported Hadoop Distributions: Add support for Amazon AWS EMR 6.14.0 | DAP-42564 |
---|---|---|
Datameer now supports Amazon AWS EMR version 6.14.0. |
2 | Security: Upgrade JSch Library for SFTP/SCP connections | DAP-42550 |
---|---|---|
The Java SSH implementation has been updated to fulfill the latest security recommendations. |
Bug Fixes
1 | Joins on BigDecimal keys return different results when Map side vs Reduce side join is used | DAP-42574 |
---|---|---|
BigDecimal values with different scales (e.g. 1.5 vs. 1.50) are now consistently considered equal for JOIN and GROUPBY operations, so Map-side and Reduce-side joins return the same results. | Bug fixed. |
Datameer X 11.2.10
Improvements
1 | Supported Hadoop Distributions: Add support for Amazon AWS EMR 6.9.0 | DAP-42563 |
---|---|---|
Datameer now supports Amazon AWS EMR version 6.9.0. |
Bug Fixes
1 | Filesystem: Identify filesystem not being closed leaks | DAP-42559 |
---|---|---|
Leaks caused by S3A filesystem instances not being closed have been identified and fixed. Datameer now runs as expected again. | Bug fixed. |
2 | Workbook: Workbook navigation tab sticks forever when the Workbook has been deleted | DAP-42545 |
---|---|---|
The Workbook tab is now being closed after the Workbook has been deleted. | Bug fixed. |
3 | Performance: UI - Context menu behavior causes high resource usage in Workbooks with many sheets | DAP-42543 |
---|---|---|
A sheet's context menu options can be used without any loss of performance. | Bug fixed. |
Datameer X 11.2.9
Bug Fixes
1 | Google Data Proc: Version 2.1 - Cannot create External System for Hive | DAP-42529 |
---|---|---|
Creating an External System for Hive works as expected. | Bug fixed. |
2 | Workbook: 'WorkbookRestoreManager' lazy initialization failure | DAP-42536 |
---|---|---|
After a Datameer restart, opening a Workbook via the File Browser initializes the 'WorkbookRestoreManager' again, and a Workbook can also be opened directly via its URL again. | Bug fixed. |
3 | Authentication: 'Group Filters' section is not opening | DAP-42537 |
---|---|---|
The 'Group Filters' section on the admin configuration page for external authentication can be opened and closed again. | Bug fixed. |
Datameer X 11.2.8
Improvements
1 | Supported Hadoop Distributions: Add support for Google DataProc 2.0.67 and 2.1.14 | DAP-42396 and DAP-42521 |
---|---|---|
Datameer now supports Google DataProc versions 2.0.67 and 2.1.14. |
2 | Plug-ins: plugin-snowflake: Use an existing Snowflake stage for export file staging | DAP-42505 |
---|---|---|
The plug-in now provides the option to use an existing named Snowflake stage, defined via a variable in the export job, as the staging location. |
Bug Fixes
1 | Workbook: Custom Workbook undo configuration is reverted to default after a Datameer restart | DAP-42520 |
---|---|---|
The custom 'Workbook Undo' configuration is kept after restarting Datameer. | Bug fixed. |
2 | Performance: Datameer became unavailable because of high memory and CPU consumption | DAP-42522 |
---|---|---|
After patching the Hadoop AWS library, Datameer works again without performance loss. | Bug fixed. |
3 | Export: Tableau - Export job setup using JSON Web Token authentication method causes Null Pointer Exception | DAP-42533 |
---|---|---|
The Tableau plug-in has been patched. Export jobs can be executed and new jobs can be created with the JWT authentication method again. | Bug fixed. |
Datameer X 11.2.7
Improvements
1 | Plug-ins: plugin-snowflake - use Snowflake's internal stage for exports | DAP-42491 |
---|---|---|
The plug-in now provides the option to select Snowflake's internal stage as the default storage option. The export data is first written to the local file system and then uploaded to Snowflake via a 'PUT' command. |
Bug Fixes
1 | Timezone not resolving correctly in Mexico | DAP-42497 |
---|---|---|
After the code fix, the correct timezone is displayed again for date and time assets in Mexico. | Bug fixed. |
2 | Workbook: Sorting by the 'Last Processed' time doesn't work | DAP-42501 |
---|---|---|
The sorting functionality now works again as expected in the File Browser's Artifacts bar. | Bug fixed. |
Datameer X 11.2.6
Improvements
1 | REST API: Output a response when creating a Workbook via the API | DAP-42481 |
---|---|---|
REST API v2 calls for creating a new Workbook now return the following information: status, configuration ID, file ID, file UUID, and file path. |
2 | Import/ Export: Use the local Datameer timezone instead of UTC while converting numeric values into a date | DAP-42475 |
---|---|---|
Two new global properties, 'das.import.parquet.int64.timestamp.adjusted-to-utc' and 'das.import.parquet.int96.timestamp.adjusted-to-utc', ensure that Parquet int64/int96 encoded timestamps are treated as being in Datameer's local timezone when set to 'false' (a configuration sketch follows after this improvements list). |
3 | Supported Hadoop Distributions: Add support for Cloudera Private Cloud/ Cloudera Runtime 7.1.8 | DAP-42394 |
---|---|---|
Datameer now supports Cloudera Private Cloud/ Cloudera Runtime 7.1.8. |
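A minimal sketch of how the two Parquet timestamp properties from improvement 2 above could be set; where they are configured (e.g. as custom properties for the cluster or for a single job) depends on your installation:

```properties
# Treat Parquet int64/int96 encoded timestamps as being in Datameer's local
# timezone instead of UTC ('false' disables the UTC adjustment, per DAP-42475).
das.import.parquet.int64.timestamp.adjusted-to-utc=false
das.import.parquet.int96.timestamp.adjusted-to-utc=false
```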
Bug Fixes
1 | Administration: Housekeeping doesn't work - jobs stuck in the job scheduler | DAP-42485 & DAP-42489 |
---|---|---|
Housekeeping can now be configured via the properties 'housekeeping.keep-deleted-data' and 'housekeeping.keep-deleted-data-max' to remove outdated files (a configuration sketch follows after this bug-fix list). | Bug fixed. |
2 | REST API: Workbooks with schedule created via REST API v2 are not triggered | DAP-42484 |
---|---|---|
Scheduled triggering for Workbooks created via the REST API v2 now works without the need to manually resave the Workbook's configuration in the User Interface. | Bug fixed. |
3 | Import/ Export: Updating Snowflake drivers | DAP-42479 |
---|---|---|
The patched 'snowflake-jdbc' plug-in now enables the Datameer - Snowflake connection again. | Bug fixed. |
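A hedged sketch of the housekeeping configuration from bug fix 1 above; the property names are taken from DAP-42485/DAP-42489, but the example values are placeholders whose expected format should be verified for your installation:

```properties
# Retention for deleted data before housekeeping removes it (placeholder value).
housekeeping.keep-deleted-data=7d
# Hard upper limit for retaining deleted data (placeholder value).
housekeeping.keep-deleted-data-max=28d
```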
Datameer X 11.2.5
Improvements
1 | Security: Upgrade Jetty to version 9.4.50 | DAP-42458 |
---|---|---|
Datameer has been moved to Jetty version 9.4.50 to ensure the services are not affected by any known vulnerabilities. |
Bug Fixes
1 | Security: Penetration tests - GetFiles metadata, get the folder content via the folder ID | DAP-42468 |
---|---|---|
A user is not allowed to view any metadata for a directory without proper access (no folder path or file ID is exposed). | Bug fixed. |
2 | Security: Penetration tests - Users may not see other users' schema logs | DAP-42469 |
---|---|---|
A user is not allowed to see the parsing records from the artifacts created by other users. | Bug fixed. |
3 | Security: Penetration tests - The '/browser/list-file' endpoint returns metadata for any file in the system | DAP-42470 |
---|---|---|
A user is not allowed to view artifact metadata without having at least a 'View' permission for these items. | Bug fixed. |
4 | Security: Penetration tests - Dependency graph information doesn't check the permission authorization | DAP-42471 |
---|---|---|
A user is not allowed to load an artifact’s dependency graph without having at least a 'View' permission for this item. | Bug fixed. |
5 | Security: Penetration tests - detailed error messages displayed | DAP-42472 |
---|---|---|
Only generic error messages without error details are now returned to Datameer users. Stack traces are logged server-side and only accessible by developers or administrators. The associated property handles the error messages. | Bug fixed. |
6 | Security: Penetration tests - '/rest/user-management/authenticable-users' should return list of usernames belonging to the same role only with CHANGE_FOLDER_OWNER capability | DAP-42473 |
---|---|---|
Only users who belong to the same role and have the 'change folder owner' capability can view other user names via this REST API call. | Bug fixed. |
Datameer X 11.2.4
Improvements
1 | Drivers: Update Redshift Native JDBC driver to use Amazon's JDBC42 driver | DAP-42443 |
---|---|---|
Since the Amazon Redshift Native JDBC driver is deprecated, Datameer has been updated to use the Amazon Redshift JDBC42 driver. |
Bug Fixes
1 | Plugins: plugin-hbase+cdh-7.1.7.0 - Import fails | DAP-42440 |
---|---|---|
Setting the property 'hbase.server.sasl.provider.extras=org.apache.hadoop.hbase.security.provider.GssSaslClientAuthenticationProvider' bypasses the service loader lookup when providing authentication providers (see the sketch below). | Bug fixed. |
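A minimal sketch of the workaround property described above; it would typically be added to the custom properties of the HBase/Hadoop configuration, though the exact location depends on your setup:

```properties
# Bypass the service loader lookup for SASL client authentication providers
# on plugin-hbase with CDH 7.1.7.0 (value as stated in DAP-42440).
hbase.server.sasl.provider.extras=org.apache.hadoop.hbase.security.provider.GssSaslClientAuthenticationProvider
```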
Datameer X 11.2.3
Improvements
1 | Importing Data: Add an option to configure the default value for "Records for Schema Detection" | DAP-42421 |
---|---|---|
The new custom property 'das.conductor.default.record.sample.size' controls which 'Records for Schema Detection' value is set by default whenever a user creates a new Import Job, Data Link, or File Upload. It should be adjusted, e.g. decreased to 250 - 500, when working with large JSON objects (see the example below). Once updated, it affects only newly created Import Jobs; the 'Records for Schema Detection' value of existing artifacts remains intact. |
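A minimal sketch of the new default sample-size property; the value 500 is only one of the suggested values for large JSON objects, not a mandated default:

```properties
# Default 'Records for Schema Detection' for newly created Import Jobs,
# Data Links, and File Uploads (decrease to roughly 250-500 for large JSON objects).
das.conductor.default.record.sample.size=500
```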
Bug Fixes
1 | Export: Snowflake - Exporting to a new table leads to duplicated columns error intermittently | DAP-42012 |
---|---|---|
After fixing the plug-in, the export sheet can be executed without any errors again. | Bug fixed. |
2 | Export: Snowflake - Intermittent Export failure, insert value list does not match the column list | DAP-42442 |
---|---|---|
After a plug-in fix, the columns list now matches the insert value list as expected again. | Bug fixed. |
Datameer X 11.2.2
Improvements
1 | Supported Hadoop Distributions: Add support for Amazon EMR 6.8.0 | DAP-42400 |
---|---|---|
Datameer X now supports Amazon EMR version 6.8.0. |
2 | Properties: Make HadoopUtil#HADOOP_ACCESS_TIMEOUT configurable | DAP-42404 |
---|---|---|
The timeouts of the S3A filesystem can now be changed via the properties 'fs.s3a.connection.establish.timeout' and 'fs.s3a.connection.timeout' in order to prevent job timeouts (see the example below). |
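A hedged sketch of the S3A timeout tuning from improvement 2 above; the millisecond values are illustrative placeholders, not recommendations:

```properties
# Time allowed to establish a connection to S3 (milliseconds, example value).
fs.s3a.connection.establish.timeout=60000
# Socket timeout for an established S3 connection (milliseconds, example value).
fs.s3a.connection.timeout=60000
```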
Bug Fixes
1 | Security: Spring framework CVE vulnerabilities | DAP-42402 |
---|---|---|
The relevant security fixes have been backported to Spring framework 4.3. | Bug fixed. |
Datameer X 11.2.1
Bug Fixes
1 | Import/ Export: Hive (with ADLS backed external table) - Failure on import | DAP-42389 |
---|---|---|
Hive import jobs now succeed again when executed on the cluster after setting several properties and adapting the Hadoop configuration. | Bug fixed. |
2 | Plug-ins: Plugin Parquet - Datameer encodes DATE values in base64 while exporting into a Parquet file | DAP-42382 |
---|---|---|
After the 'plugin-parquet' was patched, the DATE fields are exported correctly. | Bug fixed. |
3 | Export: Tableau - Hyper jobs fail intermittently with "Hyper Server did not call back on the callback connection." | DAP-42380 |
---|---|---|
Users can now export Hyper-formatted files to Tableau without failures again after the associated plug-in has been updated. | Bug fixed. |
Datameer X 11.2.0
Improvements
1 | Supported Hadoop Distributions: Add support for Amazon EMR 5.36.0 | DAP-42369 |
---|---|---|
Datameer X now supports Amazon EMR version 5.36.0. |
2 | Supported Hadoop Distributions: Add support for Amazon EMR 6.7.0 | DAP-42375 |
---|---|---|
Datameer X now supports Amazon EMR version 6.7.0. |
3 | Backup & Restore: Performance and lack of logging problem | DAP-42370 |
---|---|---|
The backup and restore process now executes at the expected speed for a large number of artifacts. |
Bug Fixes
1 | Plug-ins: EMR - 'plugin-emr' High Availability discovery mode picks local running Yarn Resource Manager at 0.0.0.0:8088 | DAP-42378 |
---|---|---|
The EMR cluster can now be set up in High Availability mode. | Bug fixed. |
2 | Plug-ins: EMR - 'plugin-emr' "default=obtain from ec2 environment" region option cannot be saved | DAP-42372 |
---|---|---|
When configuring the cluster mode in EMR, the default option for the region can now be saved. | Bug fixed. |
3 | Upgrade: Java upgrade 'Upgrade Filesystem Artifact To Delete' doesn't trigger the database upgrade | DAP-42 |
---|---|---|
The Java upgrade script now updates the table 'Upgrade Filesystem Artifact To Delete' as expected or throws a clear error message. | Bug fixed. |