
Datameer's housekeeping is a background service that keeps the software running smoothly by deleting unnecessary data: it improves processing by deleting obsolete data from HDFS, removing old entries from the job history, and removing unsaved workbooks. The service consists of different operations, such as the deletion of physical data on the cluster or the deletion of data entities from the database. All data entities that are set to be deleted must first be marked for deletion. When a data entity is actually deleted is based on your individual retention policy.

The retention policy is defined either by count, where data objects above a given number are marked for deletion (e.g., keep 10 data objects; when object 11 comes in, delete object 1), by time (e.g., at the end of every business day), or by a combination of both.

Table of Contents

What Does Datameer's Housekeeping Service Do?

...

Data entities that are set to be deleted are first marked for deletion. Data entity deletion is based on your site's retention policy and can be configured by the Datameer administrator using the default.properties file.
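The count- and time-based retention rules described above can be sketched as a simple marking function. This is an illustrative sketch only; the function and field names are assumptions, not Datameer's actual implementation.

```python
from datetime import datetime, timedelta

def mark_for_deletion(objects, keep_count=10, max_age=None, now=None):
    """Return the IDs of objects (oldest first) to mark for deletion.

    objects    -- list of (object_id, created_at) tuples, oldest first
    keep_count -- count rule: keep at most this many objects
    max_age    -- time rule: a timedelta, or None to disable
    """
    now = now or datetime.now()
    marked = []
    # Count rule: everything beyond the newest `keep_count` objects goes.
    excess = len(objects) - keep_count
    for i, (obj_id, created_at) in enumerate(objects):
        too_many = i < excess                               # oldest first
        too_old = max_age is not None and now - created_at > max_age
        if too_many or too_old:
            marked.append(obj_id)
    return marked
```

With eleven objects and `keep_count=10`, only the oldest object is marked, mirroring the "keep 10, delete object 1 when object 11 comes in" example above.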

A property that delays file deletion from HDFS for a period longer than the standard backup interval can be used to configure failover behavior. Similarly, a property that delays data deletion after a Datameer upgrade can be used to assure that roll back is successful if it is needed.


Anchor
housekeeping_delete

...

Housekeeping does delete

  • Data in status "marked for deletion" on the Hadoop cluster after 30 minutes. 

After the data status has been set to MARKED_FOR_DELETION, filesystem artifacts (e.g., job logs) are deleted in two steps. First, the database entry is deleted and a new database entry is created in the FilesystemArtifactToDelete table with the path to the HDFS object. The entry has its state column set to WAITING_FOR_DELETION. Second, the housekeeping service collects all entries in state WAITING_FOR_DELETION from the FilesystemArtifactToDelete table and tries to delete these paths from HDFS. If this succeeds, the entry is removed from the table.
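The two-step cleanup can be sketched as follows. The table and state names mirror the description above (FilesystemArtifactToDelete, WAITING_FOR_DELETION, DELETION_FAILED), but the code itself is an assumption of this sketch, and `delete_fn` merely stands in for the actual HDFS client call.

```python
filesystem_artifact_to_delete = []  # stand-in for the database table

def enqueue_artifact(hdfs_path):
    """Step 1: record the HDFS path with state WAITING_FOR_DELETION."""
    filesystem_artifact_to_delete.append(
        {"path": hdfs_path, "state": "WAITING_FOR_DELETION"})

def housekeeping_pass(delete_fn):
    """Step 2: try to delete every waiting path.

    Entries that delete successfully are dropped from the table; entries
    that fail are flagged DELETION_FAILED so a later run can report them.
    """
    remaining = []
    for entry in filesystem_artifact_to_delete:
        if entry["state"] != "WAITING_FOR_DELETION":
            remaining.append(entry)
            continue
        try:
            delete_fn(entry["path"])      # e.g. an HDFS delete call
        except OSError:
            entry["state"] = "DELETION_FAILED"
            remaining.append(entry)
    filesystem_artifact_to_delete[:] = remaining
```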

...

As of Datameer 6.3

...


  • Data entities from the database that are in status "deleted".
  • Job history after 28 days.
  • Unsaved workbooks after 3 days.
Info

All of these can be configured by the administrator in the default.properties file to best fit individual needs.


...

As of Datameer 6.3

If a deletion fails, the entry's state column is set to DELETION_FAILED.

Housekeeping does not delete

  • Data not marked for deletion (configurable for failover and rollback reasons).
  • Data in status "marked for deletion" that is used in a running job.
  • Data referenced by an active workbook snapshot. 
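Combined, the exclusion rules above amount to an eligibility check along these lines. This is a sketch; the field names and surrounding schema are assumptions, not Datameer's actual code.

```python
def eligible_for_deletion(data, running_job_data_ids, snapshot_data_ids):
    """True only if `data` is marked AND not pinned by a job or snapshot."""
    if data["status"] != "MARKED_FOR_DELETION":
        return False                  # never touch unmarked data
    if data["id"] in running_job_data_ids:
        return False                  # in use by a running job
    if data["id"] in snapshot_data_ids:
        return False                  # referenced by a workbook snapshot
    return True
```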

...

Configuring Housekeeping

The housekeeping configuration settings can be found in the default.properties file in Datameer.

Code Block
languagebash
################################################################################################
## Housekeeping configuration
################################################################################################
# As of Datameer v6.3
housekeeping.enabled=true
# Define the maximum number of days job executions are saved in the job history after a job has been completed.
housekeeping.execution.max-age=28d
# Maximum number of out-dated executions that should be deleted per housekeeping run
housekeeping.run.delete.outdated-job-executions=50
# To allow for better failover due to a crashed database, deleted data should be kept longer than the configured
# frequency of database backups.
housekeeping.keep-deleted-data=30m
# Maximum number of out-dated data objects that should be marked for deletion per housekeeping run
housekeeping.run.mark-for-deletion.outdated-data-objects=200
# Maximum number of out-dated data objects that should be deleted per housekeeping run
housekeeping.run.delete.outdated-data-objects=25
# Maximum number out-dated data artifacts that should be deleted from HDFS per housekeeping run
housekeeping.run.delete.outdated-data-artifacts=100
# Define the maximum number of days unsaved workbooks are stored in the database.
housekeeping.temporary-files.max-age=3d
# Minimum time to keep files in temporary folder after last access.
housekeeping.temporary-folder-files.max-age=30d
# Maximum number of out-dated temporary conductor files that should be deleted per housekeeping run
housekeeping.run.delete.outdated-temporary-files=50
# The time that the housekeeping service falls asleep after each cycle
housekeeping.sleep-time=1h
################################################################################################
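The duration values above use a shorthand notation (e.g. 30m, 1h, 28d). A small parser illustrating that notation is sketched below; the parser itself is an assumption of this sketch, not Datameer code.

```python
import re
from datetime import timedelta

_UNITS = {"s": "seconds", "m": "minutes", "h": "hours", "d": "days"}

def parse_duration(value):
    """Turn a shorthand like '28d', '1h', '30m', or '45s' into a timedelta."""
    match = re.fullmatch(r"(\d+)([smhd])", value.strip())
    if not match:
        raise ValueError(f"unsupported duration: {value!r}")
    amount, unit = match.groups()
    return timedelta(**{_UNITS[unit]: int(amount)})
```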

The default configuration works well for most enterprise environments and is set to control the volume of artifacts that the housekeeping service touches during one transaction. The housekeeping service itself runs as long as there are objects to delete and performs multiple transactions as necessary. By default, the service strives for the largest possible delete count. Unless you would like to keep artifacts longer or change the number of transactions per run, there is no need to change the default configuration.
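The run-as-long-as-needed behavior can be sketched as a batching loop: each transaction handles at most a configured number of artifacts, and the service loops until nothing is left. The function and parameter names here are illustrative stand-ins, not Datameer internals.

```python
def housekeeping_run(pending, delete_batch, batch_limit=25):
    """Drain `pending` in batches of at most `batch_limit` per transaction.

    Returns the number of transactions performed, mirroring the
    per-run limits in the configuration above (e.g. 25 data objects
    deleted per transaction).
    """
    transactions = 0
    while pending:
        batch, pending = pending[:batch_limit], pending[batch_limit:]
        delete_batch(batch)           # one transaction per batch
        transactions += 1
    return transactions
```

With 60 pending objects and a limit of 25, the service performs three transactions in a single run rather than stopping after the first batch.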

Anchor
failover_prop
Adjusting for failover protection

You can delay deleting files from HDFS for a period longer than the backup interval by adding this property in conf/live.properties:

Code Block
languagebash
# To allow for better failover due to a crashed database, deleted data should be kept longer than the configured
# frequency of database backups.
housekeeping.keep-deleted-data=2h

For more information, see Configuring a Server for Datameer Failover.

Anchor
update_rollback
Adjusting for roll back

You can delay deleting files from HDFS following a Datameer product upgrade by adding this property in conf/live.properties:

Code Block
languagebash
# Don't delete any data on HDFS for this period of time after an upgrade. This allows for a safe rollback to
# a previous version 
housekeeping.keep-deleted-data-after-upgrade=2d

For more information see Preserving Your Environment and Data Prior to Upgrade.

Anchor
housekeeping_log

...

Housekeeping Service Log Files

Housekeeping service information is stored separately from the conductor.log. The housekeeping log files can be found in <Datameer>/logs/housekeeping.log*.

By default, the housekeeping service logs to a file of up to 1 MB. After 1 MB, a new log file is created, up to 10 times, for a total of 11 housekeeping log files. After that, the next log file replaces the oldest file in a repeating cycle. All existing files in <Datameer>/logs/housekeeping.log* should be reviewed for a complete understanding of housekeeping status. See Log4j for more information and how to change the default behavior of the file appender.
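The rotation described above corresponds to a standard Log4j rolling file appender. A configuration along these lines would produce it; this is an illustrative sketch assuming Log4j 1.x properties syntax, with a hypothetical appender name, not Datameer's shipped configuration.

```bash
# Hypothetical Log4j 1.x fragment: rotate at 1MB, keep 10 backups
# (11 files total), overwriting the oldest in a repeating cycle.
log4j.logger.housekeeping=INFO, HOUSEKEEPING
log4j.appender.HOUSEKEEPING=org.apache.log4j.RollingFileAppender
log4j.appender.HOUSEKEEPING.File=logs/housekeeping.log
log4j.appender.HOUSEKEEPING.MaxFileSize=1MB
log4j.appender.HOUSEKEEPING.MaxBackupIndex=10
log4j.appender.HOUSEKEEPING.layout=org.apache.log4j.PatternLayout
```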

Reading Housekeeping Service Log Files

The log contains the following entries:

Example: Service run start


Panel

[system] INFO [<timestamp>] [HousekeepingService thread-1] (HousekeepingService.java:155) - Starting housekeeping
...

Example: Service run

Panel
...
[system]  INFO [<timestamp>] [HousekeepingService thread-1] (JobExecutionService.java:461) - Deleted 45 executions: <id>, <id>, ... , <id> ...
[system]  INFO [<timestamp>] [HousekeepingService thread-1] (HousekeepingService.java:410) - Deleting artifact temp/job-<jobExecutionID>.
[system]  INFO [<timestamp>] [HousekeepingService thread-1] (HousekeepingService.java:413) - Deleting <fs>:///<path>/job-<jobExecutionID>
...

Info messages

Artifacts that housekeeping hasn't yet deleted are logged in the following way:

Panel
...
[system]  INFO [<timestamp>] [HousekeepingService thread-1] (HousekeepingService.java:447) - Not deleting WorkbookData[id=<configID>,status=MARKED_FOR_DELETION], because it is at least referenced in the kept sheet '<sheetName>' of workbook '/<DatameerPath>/<workbookName>.wbk'.
[system]  INFO [<timestamp>] [HousekeepingService thread-1] (HousekeepingService.java:447) - Not deleting DataSourceData[id=<ID>,status=MARKED_FOR_DELETION], because it is at least referenced in the kept sheet '<sheetName>' of workbook '/<DatameerPath>/<workbookName>.wbk'.
...

Based on this information, you might review the data retention settings of referenced workbooks and worksheets.

Example: Clean up

Panel
...
[system]  INFO [<timestamp>] [HousekeepingService thread-1] (HousekeepingService.java:358) - Setting data status of (WorkbookData:<dataID>) to DELETED
[system]  INFO [<timestamp>] [HousekeepingService thread-1] (HousekeepingService.java:362) - Deleting <fs>:/<path>/workbooks/<configID>/<jobID>
[system]  INFO [<timestamp>] [HousekeepingService thread-1] (HousekeepingService.java:358) - Setting data status of (DataSourceData:<dataID>) to DELETED
[system]  INFO [<timestamp>] [HousekeepingService thread-1] (HousekeepingService.java:362) - Deleting <fs>:/<path>/importlinks/<configID>/<jobID>
...
[system]  INFO [<timestamp>] [HousekeepingService thread-1] (HousekeepingService.java:109) - Next check for physical data to delete will start at offset 109.
[system]  INFO [<timestamp>] [HousekeepingService thread-1] (HousekeepingService.java:499) - Deleted data WorkbookData[id=<dataID>,status=DELETED] created by DapJobExecution{id=<jobID>, type=NORMAL, status=COMPLETED}
[system]  INFO [<timestamp>] [HousekeepingService thread-1] (HousekeepingService.java:499) - Deleted data DataSourceData[id=<dataID>,status=DELETED] created by DapJobExecution{id=<jobID>, type=NORMAL, status=COMPLETED}
...

...