Friday, December 27, 2013

HDFS, Data loss and corruption

We often get questions about how HDFS protects data and which mechanisms prevent data corruption. Eric Sammer explains this in detail in Hadoop Operations.

In addition to the points below, you can also sync the files to a second cluster, simply to protect against human error, like deleting a subset of data. If you have enough space in your cluster, enabling the trash in core-site.xml and setting the interval to more than a day helps too.

<property>
  <name>fs.trash.interval</name>
  <value>1440</value>
  <description>Number of minutes after which the checkpoint
    gets deleted. If zero, the trash feature is disabled.
    1440 means one day.</description>
</property>

<property>
  <name>fs.trash.checkpoint.interval</name>
  <value>15</value>
  <description>Number of minutes between trash checkpoints.
    Should be smaller or equal to fs.trash.interval.
    Every time the checkpointer runs it creates a new checkpoint
    out of current and removes checkpoints created more than
    fs.trash.interval minutes ago.</description>
</property>
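
With the trash enabled, a delete only moves the data into the user's trash directory in HDFS, from where it can be restored until the checkpoint expires. A minimal sketch (the path is just an example):

# a delete moves the file to /user/<username>/.Trash instead of removing it immediately
hdfs dfs -rm /hive/tpcds/customer/customer.dat
# restore it from the trash as long as the checkpoint has not expired
hdfs dfs -mv /user/$USER/.Trash/Current/hive/tpcds/customer/customer.dat /hive/tpcds/customer/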


HDFS is designed to protect data in different ways to minimize the risk of data loss while keeping write speed high. In some circumstances this makes HDFS a NAS replacement for large files, with the ability to access the stored data quickly. The illustration below gives a simplified view of the data flow:

[Illustration: simplified HDFS write data flow]

HDFS has the following mechanisms implemented by default (a quick command-line sketch follows the list):
  1. Data written to files in HDFS is split up into chunks (usually 128MB in size). Each chunk (called a block) is replicated three times, by default, to three different machines. Multiple copies of a block are never written to the same machine. Replication level is configurable per file. 
  2. HDFS actively monitors the number of available replicas of each block, compared to the intended replication level. If, for some reason, a disk or node in the cluster should become unavailable, the filesystem will repair the missing block(s) by creating new replicas from the remaining copies of the data.
  3. HDFS can be (and normally is) configured to place block replicas across multiple racks of machines to protect against catastrophic failure of an entire rack or its constituent network infrastructure. This is called RackAwareness and should reflect your topology.
  4. Each block has an associated checksum computed on write, which is verified on all subsequent reads. Additionally, to protect against "bit rot" of files (and their blocks) that are not regularly read, the filesystem automatically verifies all checksums of all blocks on a regular basis. Should any checksum not verify correctly, HDFS will automatically detect this, discard the bad block, and create a new replica of the block.
  5. Filesystem metadata - information about object ownership, permissions, replication level, path, and so on - is served by a highly available pair of machines (i.e. namenodes) in CDH4. Updates to metadata are maintained in a traditional write-ahead transaction log that guarantees durability of changes to metadata information. The transaction log can be written to multiple physical disks and, in a highly available configuration, is written to multiple machines.
  6. HDFS block replicas are written in a synchronous, in-line replication pipeline. That is, when a client application receives a successful response from the cluster that a write was successful, it is true that at least a configurable minimum number of replicas are also complete. This eliminates the potential failure case of asynchronous replication where a client could complete a write to a node, receive a successful response, only for that one node to fail before it's able to replicate to another node.
  7. HDFS is fully instrumented with metric collection and reporting so monitoring systems (such as Cloudera Manager) can generate alerts when faults are detected. Metrics related to data integrity include unresponsive nodes in the cluster, failed disks, missing blocks, corrupt blocks, under-replicated blocks, and so on. Cloudera Manager has extensive HDFS-specific monitoring configured out of the box.
  8. HDFS supports directory-level filesystem quotas to protect against accidental denial of service attacks that could otherwise cause critical applications to fail to write data to the cluster.
  9. All higher level data storage and processing systems in CDH (MapReduce, HBase, Hive, Pig, Impala) use HDFS as their underlying storage substrate and, as a result, have the same data protection guarantees described above.
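
Some of these mechanisms can be observed from the command line: fsck reports missing, corrupt, and under-replicated blocks, and the per-file replication level from point 1 can be changed with setrep. A quick sketch, the paths are only examples:

# report files, blocks and any missing / corrupt / under-replicated blocks under a path
hdfs fsck /hive/tpcds -files -blocks
# raise the replication level of a single file to 5 and wait until it is reached
hdfs dfs -setrep -w 5 /hive/tpcds/store_sales/store_sales.dat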

Wednesday, November 13, 2013

Export more than 10k records with Sqoop

If you want to export more than 10k records out of an RDBMS via Sqoop, there are some settings and properties you can tweak.

Parameter --num-mappers
The number of simultaneous connections that will be opened against the database. Sqoop uses that many processes to export the data (each process exports a slice of the data). Here you have to take care of the maximum number of open connections your RDBMS allows, since too many mappers can overwhelm the RDBMS easily.

Parameter --batch
Enables batch mode on the JDBC driver. Instead of executing the insert statements one by one, the driver queues them and sends them to the database in batches.

Property sqoop.export.records.per.statement
The number of rows that are packed into a single insert statement, e.g. INSERT INTO xxx VALUES (), (), (), ...
Keep in mind that very large multi-row statements can exceed the maximum statement or packet size your RDBMS accepts.

Property sqoop.export.statements.per.transaction
The number of insert statements per single transaction, e.g. BEGIN; INSERT ...; INSERT ...; COMMIT;

You can specify the properties (even both at the same time) as generic Hadoop arguments (-D) on the command line, for example:
sqoop export -Dsqoop.export.records.per.statement=X --connect ...
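
Putting it together, a sketch of an export that combines both properties with a higher mapper count could look like this (connection string, credentials, table, and export directory are placeholders):

# export /user/hive/warehouse/sales into the RDBMS table "sales",
# 100 rows per INSERT, 100 INSERTs per transaction, 8 parallel connections
sqoop export \
  -Dsqoop.export.records.per.statement=100 \
  -Dsqoop.export.statements.per.transaction=100 \
  --connect jdbc:mysql://dbhost/shop \
  --username sqoop -P \
  --table sales \
  --export-dir /user/hive/warehouse/sales \
  --num-mappers 8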

Monday, September 2, 2013

Connect to HiveServer2 with a kerberized JDBC client (Squirrel)

Squirrel works with Kerberos; if you don't use Kerberos, you don't need the JAVA_OPTS changes at the end. My colleague, Chris Conner, has created a Maven project that pulls down all of the dependencies for a JDBC program:

https://github.com/cmconner156/hiveserver2-jdbc-kerberos

Note that in a Kerberos environment you need to kinit before using Squirrel. The above program handles kinit for you. If you are not using Kerberos and you want to use the above program, comment out the following lines:

System.setProperty("java.security.auth.login.config","gss-jaas.conf");
System.setProperty("javax.security.auth.useSubjectCredsOnly","false");
System.setProperty("java.security.krb5.conf","krb5.conf");


Then make sure to change the JDBC URI so that it does not contain the principal. Also, it's worth mentioning that with Kerberos I had some issues with differing Java versions, so try to match your client's Java version with the one running on the HiveServer2 host.
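
When using Kerberos with Squirrel itself, get a ticket before starting the client and verify it, for example (the principal is only an example for the realm used below):

# obtain a Kerberos ticket before starting Squirrel
kinit alexander@ALO.ALT
# verify that the ticket is in the credential cache
klist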

Work with Squirrel

First create a new Driver:
  1. Click on Drivers on the side.
  2. Click the + button.
  3. Enter a Name.
  4. Enter the URL like the example: jdbc:hive2://<host>:<port>/<db>;principal=<princ> (a filled-in example follows below)
  5. Enter the Driver name: org.apache.hive.jdbc.HiveDriver
  6. Click on the Extra Class Path button, then click Add, and make sure to add the following JARs:

commons-configuration-1.6.jar
commons-logging-1.0.4.jar
guava-11.0.2.jar
hadoop-auth-2.0.0-cdh4.2.0.jar
hadoop-common-2.0.0-cdh4.2.0.jar
hadoop-core-2.0.0-mr1-cdh4.2.0.jar
hive-exec-0.10.0-cdh4.2.0.jar
hive-jdbc-0.10.0-cdh4.2.0.jar
hive-metastore-0.10.0-cdh4.2.0.jar
hive-service-0.10.0-cdh4.2.0.jar
hive-shims-0.10.0-cdh4.2.0.jar
libfb303-0.9.0.jar
libthrift-0.9.0.jar
log4j-1.2.16.jar
slf4j-api-1.6.4.jar
slf4j-log4j12-1.6.1.jar

Note that the JAR versions can change with every release, so please use the ones from your installation.
Click OK to save.
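
For reference, a filled-in URL for the example realm used further down could look like the following (the host name, the default HiveServer2 port 10000, and the hive service principal are assumptions):

jdbc:hive2://hadoop1.alo.alt:10000/default;principal=hive/hadoop1.alo.alt@ALO.ALT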

Now you need to edit the Squirrel start script. On OS X, for example, it is "/Applications/SQuirreLSQL.app/Contents/MacOS/squirrel-sql.sh"; Linux-like OSes should have it in /etc/squirrel or elsewhere.

Now add the following line anywhere in the script above the actual $JAVACMD line, and make sure to enter the correct Kerberos realm and KDC:
export JAVA_OPTS="-Djava.security.krb5.realm=ALO.ALT -Djava.security.krb5.kdc=hadoop1.alo.alt"

Now edit the last line of that script; it normally looks something like this:
$JAVACMD -Xmx256m -cp "$CP" $MACOSX_SQUIRREL_PROPS -splash:"$SQUIRREL_SQL_HOME/icons/splash.jpg" net.sourceforge.squirrel_sql.client.Main --log-config-file "$UNIX_STYLE_HOME"/log4j.properties --squirrel-home "$UNIX_STYLE_HOME" $NATIVE_LAF_PROP $SCRIPT_ARGS

Change it to:

$JAVACMD -Xmx256m $JAVA_OPTS -cp "$CP" $MACOSX_SQUIRREL_PROPS -splash:"$SQUIRREL_SQL_HOME/icons/splash.jpg" net.sourceforge.squirrel_sql.client.Main --log-config-file "$UNIX_STYLE_HOME"/log4j.properties --squirrel-home "$UNIX_STYLE_HOME" $NATIVE_LAF_PROP $SCRIPT_ARGS

Notice I added the JAVA_OPTS.

Now you can add a new connection alias and it should work correctly with Kerberos.

Tuesday, July 23, 2013

Enable Replication in HBase


HBase has support for multi-site replication for disaster recovery. It is not an HA solution; the application and solution architecture will need to implement HA. Replication means that data from one cluster is automatically replicated to a backup cluster, either within the same data center or across data centers. There are three ways to configure this: master-slave, master-master, and cyclic replication. Master-slave is the simplest solution for DR, as data is written to the master and replicated to the configured slave(s). Master-master means that the two clusters cross-replicate edits, but prevent replication from going into an infinite loop by tracking mutations using the HBase cluster ID. Cyclic replication means you can have multiple clusters replicating to each other, in combinations of master-master and master-slave.
Replication relies on the WAL; the WAL edits are replayed from a source region server to a target region server.

A few important points:
1) Version alignment: in a replicated environment the versions of HBase and Hadoop/HDFS must be aligned, which means you must replicate to the same version of HBase/HDFS (> 0.92).
2) Don't let HBase manage ZooKeeper; deploy separate, user-managed ZooKeeper quorums for each cluster.
3) You need full network connectivity between all nodes in all clusters, that is, node 1 on cluster A must be able to reach node 2 on cluster B, and so on. This applies to the ZooKeeper quorums as well.
4) All tables and their corresponding column families must exist on every cluster within the replication configuration and must be named identically.
5) Don't enable HLog compression; this will cause replication to fail.
6) Do not use start_replication or stop_replication; this can cause data loss on the replicated side.

Getting Started

Step 1:
To enable replication, all clusters in the replication configuration must add the following to their hbase-site.xml:

<property>
  <name>hbase.replication</name>
  <value>true</value>
</property>


Step 2:

Launch the HBase shell and set the replication scope on the column family:
hbase(main):001:0> alter 'your_table', {NAME => 'family_name', REPLICATION_SCOPE => '1'}

Next add a peer:
hbase(main):002:0> add_peer '1', "zk1,zk4:2181:/hbase-backup"

You can list the currently configured peers with:
hbase(main):003:0> list_peers 

There are some other considerations too. On your slave peers it is worth increasing the TTL on the tables, so that accidental or malicious deletes can be recovered; you can also increase the minimum-versions property so that more history is retained on the slaves to cover more scenarios. As mentioned, this is not a cross-site HA solution and there is no automated cut-over, which means there is work at the application and operational level to facilitate this. To disable the replication, simply use:
hbase(main):004:0> disable_peer("1")

Note that disabling and enabling a table at the source cluster will not affect the current replication; it will start from offset 0 and replicate any entry that is scoped for replication and present in the WAL. To move to a fresh state, you have to roll the logs on the source cluster. This means that after you have removed the peer, you have to force a manual log roll in the HBase shell:
hbase(main):009:0> hlog_roll 'localhost,60020,1365483729051'

It takes the server name as an argument. You can get the region server names from the znode (/hbase/rs).
When you now re-enable the peer, the replication starts with a fresh state.
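
Put together, a sketch of that reset sequence in the HBase shell (the peer id, ZooKeeper quorum, and server name are the placeholders from the examples above):

hbase(main):010:0> remove_peer '1'
hbase(main):011:0> hlog_roll 'localhost,60020,1365483729051'
hbase(main):012:0> add_peer '1', "zk1,zk4:2181:/hbase-backup"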


Monday, May 27, 2013

Get all extended Hive tables with location in HDFS

for file in $(hive -e "show table extended like \`*\`" | grep location: | awk 'BEGIN { FS = ":" };{printf("hdfs:%s:%s\n",$3,$4)}'); do hdfs dfs -du -h $file; done;
Output:

Time taken: 2.494 seconds
12.6m  hdfs://hadoop1:8020/hive/tpcds/customer/customer.dat
5.2m  hdfs://hadoop1:8020/hive/tpcds/customer_address/customer_address.dat
76.9m  hdfs://hadoop1:8020/hive/tpcds/customer_demographics/customer_demographics.dat
9.8m  hdfs://hadoop1:8020/hive/tpcds/date_dim/date_dim.dat
148.1k  hdfs://hadoop1:8020/hive/tpcds/household_demographics/household_demographics.dat
4.8m  hdfs://hadoop1:8020/hive/tpcds/item/item.dat
36.4k  hdfs://hadoop1:8020/hive/tpcds/promotion/promotion.dat
3.1k  hdfs://hadoop1:8020/hive/tpcds/store/store.dat
370.5m  hdfs://hadoop1:8020/hive/tpcds/store_sales/store_sales.dat
4.9m  hdfs://hadoop1:8020/hive/tpcds/time_dim/time_dim.dat
0      hdfs://hadoop1:8020/user/alexander/transactions/_SUCCESS
95.0k  hdfs://hadoop1:8020/user/alexander/transactions/_logs
3.1m   hdfs://hadoop1:8020/user/alexander/transactions/part-m-00000
3.1m   hdfs://hadoop1:8020/user/alexander/transactions/part-m-00001
3.1m   hdfs://hadoop1:8020/user/alexander/transactions/part-m-00002
3.1m   hdfs://hadoop1:8020/user/alexander/transactions/part-m-00003
1.9m  hdfs://hadoop1:8020/user/hive/warehouse/zipcode_incomes_plain/DEC_00_SF3_P077_with_ann_noheader.csv
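
The same one-liner, written out as a small script with comments for readability (the behaviour is unchanged):

#!/bin/bash
# "show table extended" prints a "location:" line for every table;
# awk rebuilds the full hdfs:// URI from the colon-separated fields,
# and hdfs dfs -du -h prints the size of each location.
for file in $(hive -e "show table extended like \`*\`" \
    | grep location: \
    | awk 'BEGIN { FS = ":" } {printf("hdfs:%s:%s\n", $3, $4)}'); do
  hdfs dfs -du -h "$file"
done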