Housekeeping Service
Datameer's housekeeping is a background service that improves processing by deleting obsolete data from HDFS, removing old entries from job history, and removing unsaved workbooks.
The housekeeping service is defined either by count, where data objects above a given number are marked for deletion (e.g. keep 10 data objects; when object 11 comes in, object 1 is deleted), by time (e.g. at the end of every business day), or by a combination of both. Data entities scheduled for removal are first marked for deletion. Deletion is based on the site's specific retention policy and can be configured by the Datameer administrator in the default.properties file.
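The count-based policy can be illustrated with a short sketch. This is illustrative only, not Datameer's actual implementation; the function name and data shapes are assumptions:

```python
from collections import deque

def retain_by_count(objects, keep):
    """Count-based retention sketch: keep only the newest `keep` objects
    and mark everything older for deletion (the actual deletion happens
    later, in a separate housekeeping pass)."""
    kept = deque(maxlen=keep)                    # newest `keep` objects survive
    marked_for_deletion = []
    for obj in objects:                          # objects arrive oldest-first
        if len(kept) == kept.maxlen:
            marked_for_deletion.append(kept[0])  # oldest object falls out
        kept.append(obj)
    return list(kept), marked_for_deletion

# Keep 10 data objects: when object 11 arrives, object 1 is marked for deletion.
kept, marked = retain_by_count(list(range(1, 12)), keep=10)
```

A time-based policy would instead compare each object's age against a cutoff, and a combined policy would mark an object when either rule matches.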
A property that delays file deletion from HDFS for a period longer than the standard backup interval can be used to configure failover behavior. Similarly, a property that delays data deletion after a Datameer upgrade can be used to ensure that a rollback succeeds if it is needed.
Housekeeping does delete
- Data in status "marked for deletion" on the Hadoop cluster after 30 minutes.
After the data status has been set to MARKED_FOR_DELETION, filesystem artifacts (e.g., job logs) are deleted in two steps. First, the database entry is deleted and a new entry is created in the FilesystemArtifactToDelete table with the path to the HDFS object; the entry's state column is set to WAITING_FOR_DELETION. Second, the housekeeping service collects all entries in state WAITING_FOR_DELETION from the FilesystemArtifactToDelete table and tries to delete these paths from HDFS. If this succeeds, the entry is removed from the table.
- Data entities from the database that are in status "deleted".
- Job history after 28 days.
- Unsaved workbooks after 3 days.
As of Datameer 6.3, if a deletion fails, the entry's state column is set to DELETION_FAILED.
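The two-step cleanup described above can be sketched as follows. The state names (MARKED_FOR_DELETION, WAITING_FOR_DELETION, DELETION_FAILED) and the FilesystemArtifactToDelete table come from the text; the in-memory "tables" and the delete callback are illustrative stand-ins, not Datameer internals:

```python
def step_one(data_entities, artifact_table):
    """Step 1: drop the database entry for each entity marked for
    deletion and queue its HDFS path in the FilesystemArtifactToDelete
    table with state WAITING_FOR_DELETION."""
    for entity in [e for e in data_entities if e["status"] == "MARKED_FOR_DELETION"]:
        data_entities.remove(entity)
        artifact_table.append({"path": entity["hdfs_path"],
                               "state": "WAITING_FOR_DELETION"})

def step_two(artifact_table, delete_from_hdfs):
    """Step 2: try to delete each queued path from HDFS; on success
    remove the row, on failure mark it DELETION_FAILED."""
    for row in list(artifact_table):
        if row["state"] != "WAITING_FOR_DELETION":
            continue
        if delete_from_hdfs(row["path"]):
            artifact_table.remove(row)
        else:
            row["state"] = "DELETION_FAILED"

# Minimal walk-through: one marked entity is queued, then deleted.
entities = [{"status": "MARKED_FOR_DELETION", "hdfs_path": "/data/a"},
            {"status": "COMPLETED", "hdfs_path": "/data/b"}]
queue = []
step_one(entities, queue)
step_two(queue, delete_from_hdfs=lambda path: True)
```

Splitting the work this way means a failed HDFS delete leaves a durable record (the DELETION_FAILED row) instead of silently leaking data on the cluster.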
Housekeeping does not delete
- Data not marked for deletion (configurable for failover and rollback reasons).
- Data in status "marked for deletion" that is used in a running job.
- Data referenced by an active workbook snapshot.
As of Datameer 6.3, when a sheet is linked in a different workbook and is marked as kept, the data is copied instead of being referenced.
Configuring Housekeeping
The housekeeping configuration settings can be found in the default.properties file in Datameer.
################################################################################################
## Housekeeping configuration
################################################################################################
# (As of Datameer v6.3)
housekeeping.enabled=true
# Define the maximum number of days job executions are saved in the job history, after a job has been completed.
housekeeping.execution.max-age=28d
# Maximum number of out-dated executions that should be deleted per housekeeping run
housekeeping.run.delete.outdated-job-executions=50
# To allow for better failover due to a crashed database, deleted data should be kept longer than the configured
# frequency of database backups.
housekeeping.keep-deleted-data=2h
# Maximum number of out-dated data objects that should be marked for deletion per housekeeping run
housekeeping.run.mark-for-deletion.outdated-data-objects=200
# Maximum number of out-dated data objects that should be deleted per housekeeping run
housekeeping.run.delete.outdated-data-objects=25
# Maximum number of out-dated data artifacts that should be deleted from HDFS per housekeeping run
housekeeping.run.delete.outdated-data-artifacts=100
# Define the maximum number of days unsaved workbooks are stored in the database.
housekeeping.temporary-files.max-age=3d
# Minimum time to keep files in temporary folder after last access.
housekeeping.temporary-folder-files.max-age=30d
# Maximum number of out-dated temporary conductor files that should be deleted per housekeeping run
housekeeping.run.delete.outdated-temporary-files=50
# The time that the housekeeping service falls asleep after each cycle
housekeeping.sleep-time=1h
################################################################################################
The default configuration works well for most enterprise environments and is set to limit the volume of artifacts the housekeeping service touches during one transaction. The housekeeping service itself runs as long as there are objects to delete and performs multiple transactions as necessary; by default the service strives for the largest possible delete count. Unless you want to keep artifacts longer or change the number of transactions per run, there is no need to change the default configuration.
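The batching behavior can be sketched as a simple loop: each transaction handles at most the configured batch size (mirroring housekeeping.run.delete.outdated-data-objects=25), and the run continues until nothing is left. The function name and list-based "database" are illustrative assumptions:

```python
def housekeeping_run(outdated_objects, batch_size=25):
    """Delete outdated objects in batches of at most `batch_size` per
    transaction, looping until nothing remains; returns the number of
    transactions performed."""
    transactions = 0
    while outdated_objects:
        batch = outdated_objects[:batch_size]   # one transaction's worth of work
        del outdated_objects[:batch_size]       # "commit" the batch
        transactions += 1
    return transactions

# 110 outdated objects with the default batch of 25 take 5 transactions.
transactions = housekeeping_run(list(range(110)))
```

Keeping each transaction small bounds lock time and memory per commit, while the outer loop still guarantees that every outdated object is eventually deleted in the same run.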
Adjusting for failover protection
You can delay deleting files from HDFS for a period longer than the backup interval by adding this property in conf/live.properties:
# To allow for better failover due to a crashed database, deleted data should be kept longer than the configured
# frequency of database backups.
housekeeping.keep-deleted-data=2h
For more information see Configuring a Server for Datameer Failover.
Adjusting for roll back
You can delay deleting files from HDFS following a Datameer product upgrade by adding this property in conf/live.properties:
# Don't delete any data on HDFS for this period of time after an upgrade. This allows for a safe rollback to
# a previous version.
housekeeping.keep-deleted-data-after-upgrade=2d
For more information see Preserving Your Environment and Data Prior to Upgrade.
Housekeeping Service Log Files
Housekeeping service information is stored separately from the conductor.log. The housekeeping log files can be found at <Datameer>/logs/housekeeping.log*.
The default configuration writes the housekeeping service log up to a file size of 1 MB. After 1 MB a new log file is created, up to 10 times, for a total of 11 housekeeping log files; after that, the next log file replaces the oldest file in a repeating cycle. All existing files in <Datameer>/logs/housekeeping.log* should be reviewed for a complete understanding of housekeeping status. See Log4j for more information and how to change the default behavior of the file appender. The log contains the following entries:
Service run start
[system] INFO [<timestamp>] [HousekeepingService thread-1] (HousekeepingService.java:155) - Starting housekeeping
...
Example: Service run
...
[system] INFO [<timestamp>] [HousekeepingService thread-1] (JobExecutionService.java:461) - Deleted 45 executions: <id>, <id>, ... , <id>
...
[system] INFO [<timestamp>] [HousekeepingService thread-1] (HousekeepingService.java:410) - Deleting artifact temp/job-<jobExecutionID>.
[system] INFO [<timestamp>] [HousekeepingService thread-1] (HousekeepingService.java:413) - Deleting <fs>:///<path>/job-<jobExecutionID>
...
Info messages
Artifacts that housekeeping hasn't yet deleted are logged in the following way:
...
[system] INFO [<timestamp>] [HousekeepingService thread-1] (HousekeepingService.java:447) - Not deleting WorkbookData[id=<configID>,status=MARKED_FOR_DELETION], because it is at least referenced in the kept sheet '<sheetName>' of workbook '/<DatameerPath>/<workbookName>.wbk'.
[system] INFO [<timestamp>] [HousekeepingService thread-1] (HousekeepingService.java:447) - Not deleting DataSourceData[id=<ID>,status=MARKED_FOR_DELETION], because it is at least referenced in the kept sheet '<sheetName>' of workbook '/<DatameerPath>/<workbookName>.wbk'.
...
Based on this information, you might review the data retention settings of referenced workbooks and worksheets.
Example: Clean up
...
[system] INFO [<timestamp>] [HousekeepingService thread-1] (HousekeepingService.java:358) - Setting data status of (WorkbookData:<dataID>) to DELETED
[system] INFO [<timestamp>] [HousekeepingService thread-1] (HousekeepingService.java:362) - Deleting <fs>:/<path>/workbooks/<configID>/<jobID>
[system] INFO [<timestamp>] [HousekeepingService thread-1] (HousekeepingService.java:358) - Setting data status of (DataSourceData:<dataID>) to DELETED
[system] INFO [<timestamp>] [HousekeepingService thread-1] (HousekeepingService.java:362) - Deleting <fs>:/<path>/importlinks/<configID>/<jobID>
...
[system] INFO [<timestamp>] [HousekeepingService thread-1] (HousekeepingService.java:109) - Next check for physical data to delete will start at offset 109.
[system] INFO [<timestamp>] [HousekeepingService thread-1] (HousekeepingService.java:499) - Deleted data WorkbookData[id=<dataID>,status=DELETED] created by DapJobExecution{id=<jobID>, type=NORMAL, status=COMPLETED}
[system] INFO [<timestamp>] [HousekeepingService thread-1] (HousekeepingService.java:499) - Deleted data DataSourceData[id=<dataID>,status=DELETED] created by DapJobExecution{id=<jobID>, type=NORMAL, status=COMPLETED}
...
Example: Summary after a successful run
...
[system] INFO [<timestamp>] [HousekeepingService thread-1] (HousekeepingService.java:116) - Next check for physical data to delete will start at the beginning again.
[system] INFO [<timestamp>] [HousekeepingService thread-1] (HousekeepingService.java:225) - Housekeep physically deleted data entities: 1.04% (62ms)
[system] INFO [<timestamp>] [HousekeepingService thread-1] (HousekeepingService.java:225) - Housekeep outdated job executions : 49.92% (2986ms)
[system] INFO [<timestamp>] [HousekeepingService thread-1] (HousekeepingService.java:225) - Housekeep obsolete artifacts from cluster : 0.69% (41ms)
[system] INFO [<timestamp>] [HousekeepingService thread-1] (HousekeepingService.java:225) - Housekeep outdated temporary datameer files: 0.02% (1ms)
[system] INFO [<timestamp>] [HousekeepingService thread-1] (HousekeepingService.java:225) - Housekeep outdated temporary HDFS folder and files: 0.20% (12ms)
[system] INFO [<timestamp>] [HousekeepingService thread-1] (HousekeepingService.java:225) - Housekeep physical data : 48.01% (2872ms)
[system] INFO [<timestamp>] [HousekeepingService thread-1] (HousekeepingService.java:228) - Business: 0.2%
[system] INFO [<timestamp>] [HousekeepingService thread-1] (HousekeepingService.java:205) - Housekeeping done. took 5 sec
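The log rotation described under Housekeeping Service Log Files (1 MB files, 10 backups plus the active file) corresponds to a standard Log4j 1.x RollingFileAppender. A sketch of such a configuration follows; the appender name, logger name, file path, and layout pattern are illustrative assumptions, not Datameer's shipped settings:

```
# Hypothetical rolling-file appender matching the documented rotation behavior.
log4j.appender.HOUSEKEEPING=org.apache.log4j.RollingFileAppender
log4j.appender.HOUSEKEEPING.File=logs/housekeeping.log
# Roll to a new file after 1 MB, keeping up to 10 backup files.
log4j.appender.HOUSEKEEPING.MaxFileSize=1MB
log4j.appender.HOUSEKEEPING.MaxBackupIndex=10
log4j.appender.HOUSEKEEPING.layout=org.apache.log4j.PatternLayout
log4j.appender.HOUSEKEEPING.layout.ConversionPattern=%-5p [%d] [%t] (%F:%L) - %m%n
```

Raising MaxFileSize or MaxBackupIndex in the actual Log4j configuration extends how much housekeeping history is retained before the oldest file is overwritten.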