...
- The DFS block size defaults to 64 MB in Datameer X
- Open the Datameer X application
- Click the Administration tab at the top of the screen
- Select Hadoop Cluster from the navigation bar on the left, and choose Hadoop Cluster in the mode settings
- In the Custom Properties box, enter the new block size:
dfs.block.size=[size]
The block size must be an integer number of bytes; string values with unit suffixes aren't accepted.
Example: 134217728 bytes = 128 MB
```
dfs.block.size=134217728
```
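To double-check the byte value before pasting it into the Custom Properties box, a quick sketch (the 128 MB figure is just the example above):

```python
# Convert a block size in megabytes to the integer byte value
# that dfs.block.size expects (no unit suffixes allowed).
block_size_mb = 128  # desired block size from the example above
block_size_bytes = block_size_mb * 1024 * 1024
print(f"dfs.block.size={block_size_bytes}")  # dfs.block.size=134217728
```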
Memory and Task Calculation
...
Name | Value | Reasoning |
---|---|---|
mapred.job.tracker | <masterhost>:9001 | |
mapred.tasktracker.map.tasks.maximum | <how many map tasks run per TaskTracker concurrently> | |
mapred.tasktracker.reduce.tasks.maximum | <how many reduce tasks run per TaskTracker concurrently> | |
mapred.child.java.opts | -Xmx500m | Datameer X needs a minimum of 500 MB per task JVM with its default configuration. Interacts with the maximum number of concurrent tasks per TaskTracker. |
mapred.local.dir | <comma-separated list of all folders where the big temporary data should go> | |
mapred.system.dir | <path in HDFS where small control files are stored> | |
...
Name | Recommended Value | Description | Location |
---|---|---|---|
mapred.map.tasks.speculative.execution, mapred.reduce.tasks.speculative.execution | false | Datameer X currently doesn't support speculative execution. However, you don't need to configure these properties cluster-wide, since Datameer X disables speculative execution for every job it submits. You only need to make sure that these properties aren't set cluster-wide to true combined with the final flag also set to true, which would prevent clients from overriding the property on a per-job basis. | |
 | usually one of: | Datameer X cares about the what and the when of compression, but not about the how, i.e. the codec. It uses the codec you configured for the cluster. If you've configured a non-standard codec like LZO, you have to make sure that Datameer X has access to the codec as well. See Frequently Asked Hadoop Questions#Q. How do I configure Datameer/Hadoop to use native compression? | |
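A sketch of how the speculative-execution properties might look in a Hadoop 1.x mapred-site.xml; the file name and property names are assumed from standard Hadoop conventions. Note the absence of a final element set to true, which leaves Datameer X free to disable speculative execution per job:

```xml
<!-- mapred-site.xml (sketch): do NOT add <final>true</final> here,
     or clients such as Datameer X can no longer override the
     setting on a per-job basis. -->
<property>
  <name>mapred.map.tasks.speculative.execution</name>
  <value>false</value>
</property>
<property>
  <name>mapred.reduce.tasks.speculative.execution</name>
  <value>false</value>
</property>
```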
...