My name is Farooq and I am with the HDInsight support team here at Microsoft. In this blog I will give a brief overview of Sqoop in HDInsight and then use an example of importing data from a Windows Azure SQL Database table to an HDInsight cluster to demonstrate how you can get started with Sqoop in HDInsight.

What is Sqoop?

Sqoop is an Apache project and part of the Hadoop ecosystem. It enables data transfer between a Hadoop/HDInsight cluster and relational databases such as SQL Server, Oracle, and MySQL. Sqoop is a collection of related tools, for example import, export, list-all-tables, and list-databases. To use Sqoop, you specify the tool you want to use and the arguments that control the tool. For more information on Sqoop, please check the Sqoop User Guide.
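
Each tool is invoked by name followed by the arguments that control it. As a minimal sketch, the list-tables tool connects to a database and prints its tables; the server, user, password, and database names below are the same placeholders used later in this blog and must be replaced with your own values:

    sqoop.cmd list-tables --connect "jdbc:sqlserver://<SQLDatabaseServerName>.database.windows.net:1433;username=<SQLDatabaseUsername>@<SQLDatabaseServerName>;password=<SQLDatabasePassword>;database=<SQLDatabaseDatabaseName>"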

When do you need to use Sqoop?

You need to use Sqoop only when you are trying to import/export data between Hadoop and a relational database. HDInsight provides a full-featured Hadoop Distributed File System (HDFS) over Windows Azure Blob storage (WASB). If you want to upload data to HDInsight or WASB from any other source, for example from your local computer's file system, then you should use one of the tools discussed in this article. The same article also discusses how to import data to HDFS from SQL Database/SQL Server using Sqoop. In this blog I will elaborate on the same with an example and try to provide more detailed information along the way.

What do I need to do for Sqoop to work in my HDInsight cluster?

HDInsight 2.1 includes Sqoop 1.4.3. The Microsoft SQL Server SQOOP Connector for Hadoop is now part of Apache Sqoop 1.4, so you do not need to install the connector separately. All HDInsight clusters also have the Microsoft SQL Server JDBC driver installed, so all the components needed to transfer data between an HDInsight cluster and SQL Server are already present on an HDInsight cluster and you do not have to install anything.
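
If you want to confirm which Sqoop version is on your cluster, you can run the version tool from the Hadoop Command Line on the head node (a quick check, shown here for illustration):

    sqoop.cmd version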

How can I run a Sqoop job?

With the HDInsight preview version we could only run Sqoop commands from the Hadoop Command Line after opening a remote desktop (RDP) session to the HDInsight cluster head node. However, the release version of the HDInsight SDK includes PowerShell cmdlets to run Sqoop jobs remotely. So we can:

  1. Run Sqoop jobs locally from HDInsight head node using Hadoop Command Line
  2. Run Sqoop jobs remotely using HDInsight SDK PowerShell cmdlets

We recommend that you run your Sqoop commands remotely using the HDInsight SDK cmdlets. We will discuss both options in detail. First, let's see how we can run Sqoop jobs locally from the HDInsight head node using the Hadoop Command Line.

Run Sqoop jobs locally from HDInsight head node using Hadoop Command Line

I am assuming you already have a Windows Azure SQL Database. If you don't and you want to create one, please follow the steps in this article. Let's follow the steps below to create a test table and populate it with some sample data in your Windows Azure SQL Database, which we will import into our HDInsight cluster shortly. I will show how to do this from the Windows Azure portal, but you can also connect to the Windows Azure SQL Database from SSMS and do the same.

Note: if you want to transfer data from a SQL Server in your own environment instead, you need to change the Sqoop command with the appropriate connection information; it should be very similar to the connection string I have provided later in this blog under the 'More sample Sqoop commands' section for SQL Server on a Windows Azure VM.

  1. Log in to your Windows Azure Portal, select 'SQL Databases' on the left, and click 'Manage' at the bottom.

  2. Provide your Windows Azure SQL Database user ID and password to log in, and then click 'New Query' to open a new query window to run T-SQL queries.

  3. Copy and paste the following T-SQL query and execute it to create a test table Table1.

    CREATE TABLE [dbo].[Table1](
        [ID] [int] NOT NULL,
        [FName] [nvarchar](50) NOT NULL,
        [LName] [nvarchar](50) NOT NULL,
        CONSTRAINT [PK_Table_4] PRIMARY KEY CLUSTERED
        (
            [ID] ASC
        )
    ) ON [PRIMARY]
    GO

  4. Run the following to populate Table1 with four rows.

    INSERT INTO [dbo].[Table1] VALUES (1,'Jhon','Doe'), (2,'Harry','Hoe'), (3,'Carla','Coe'), (4,'Jackie','Joe');
    GO

  5. Now, finally, run the following T-SQL to make sure that the table is populated with the sample data. You should see the output shown below.

    SELECT * from [dbo].[Table1]
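
    With the four sample rows inserted above, the query should return something like the following:

    ID  FName   LName
    1   Jhon    Doe
    2   Harry   Hoe
    3   Carla   Coe
    4   Jackie  Joe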

Now let's follow the steps below to import the rows in Table1 to the HDInsight cluster.

  1. Log in to your HDInsight cluster head node via Remote Desktop (RDP) and double-click the 'Hadoop Command Line' icon on the desktop to open the Hadoop Command Line. RDP access is turned off by default, but you can follow the steps in this blog to enable RDP and then RDP to the head node of your HDInsight cluster.
  2. In the Hadoop Command Line, navigate to the "C:\apps\dist\sqoop-1.4.3.1.3.1.0-06\bin" folder.

    Note: Please verify the path of the Sqoop bin folder in your environment; it may vary slightly from version to version.

  3. Run the following Sqoop command to import all the rows of table "Table1" from the Windows Azure SQL Database "mfarooqSQLDB" to the HDInsight cluster.

    sqoop.cmd import --connect "jdbc:sqlserver://<SQLDatabaseServerName>.database.windows.net:1433;username=<SQLDatabaseUsername>@<SQLDatabaseServerName>;password=<SQLDatabasePassword>;database=<SQLDatabaseDatabaseName>" --table Table1 --target-dir /user/hdp/SqoopImportTable1

    Once the command executes successfully, you should see output similar to the following in the Hadoop Command Line window.

  4. There are quite a number of tools available to upload/download and view data in WASB. Let's use the Azure Storage Explorer tool. You need to install the tool on your workstation and configure it for your cluster. Once that is done, open the tool and navigate to the /user/hdp/SqoopImportTable1 folder. You should see something similar to the following: it shows four files, indicating that four map tasks were used. You can select a file and click the 'View' button to see the actual text data. You can also check the imported files and control the number of map tasks from the Hadoop Command Line, as shown below.
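
As an alternative to a GUI tool, you can list the imported files directly from the Hadoop Command Line (a quick check; the path is the same --target-dir used above):

    hadoop fs -ls /user/hdp/SqoopImportTable1

Sqoop uses four map tasks by default, which is why four part files appear. If you would rather produce a single output file, the -m (or --num-mappers) option controls the number of map tasks. A minimal sketch, using a new target directory name chosen here only for illustration:

    sqoop.cmd import --connect "jdbc:sqlserver://<SQLDatabaseServerName>.database.windows.net:1433;username=<SQLDatabaseUsername>@<SQLDatabaseServerName>;password=<SQLDatabasePassword>;database=<SQLDatabaseDatabaseName>" --table Table1 --target-dir /user/hdp/SqoopImportTable1SingleFile -m 1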

Now let's export the same rows back to the SQL database from the HDInsight cluster. Please use a different table with the same schema as 'Table1'; otherwise you would get a primary key violation error, since the rows already exist in 'Table1'.

  1. Create an empty table 'Table2' with the same schema as 'Table1'.

    CREATE TABLE [dbo].[Table2](
        [ID] [int] NOT NULL,
        [FName] [nvarchar](50) NOT NULL,
        [LName] [nvarchar](50) NOT NULL,
        CONSTRAINT [PK_Table_2] PRIMARY KEY CLUSTERED
        (
            [ID] ASC
        )
    ) ON [PRIMARY]
    GO

  2. Run the following Sqoop command from the Hadoop Command Line.

    sqoop.cmd export --connect "jdbc:sqlserver://<SQLDatabaseServerName>.database.windows.net:1433;username=<SQLDatabaseUsername>@<SQLDatabaseServerName>;password=<SQLDatabasePassword>;database=<SQLDatabaseDatabaseName>" --table Table2 --export-dir /user/hdp/SqoopImportTable1 --input-fields-terminated-by ","

More sample Sqoop commands:

Import from a SQL Server on a Windows Azure VM:

sqoop.cmd import --connect "jdbc:sqlserver://<WindowsAzureVMServerName>.cloudapp.net:1433;username=<SQLServerUserName>;password=<SQLServerPassword>;database=<SQLServerDatabaseName>" --table Table_1 --target-dir /user/hdp/SqoopImportTable

Export to a SQL Server on a Windows Azure VM:

sqoop.cmd export --connect "jdbc:sqlserver://<WindowsAzureVMServerName>.cloudapp.net:1433;username=<SQLServerUserName>;password=<SQLServerPassword>;database=<SQLServerDatabaseName>" --table Table_2 --export-dir /user/hdp/SqoopImportTable2 --input-fields-terminated-by ","

Importing to Hive from Windows Azure SQL Database:

C:\apps\dist\sqoop-1.4.2\bin>sqoop.cmd import --connect "jdbc:sqlserver://<WindowsAzureVMServerName>.cloudapp.net:1433;username=<SQLServerUserName>;password=<SQLServerPassword>;database=<SQLServerDatabaseName>" --table Table1 --hive-import

Note: This will store the files under the hive/warehouse/<TableName> folder in HDFS (for example, hive/warehouse/table1/part-m-00000).
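
To confirm where the imported files landed, you can list the Hive warehouse directory from the Hadoop Command Line (the exact warehouse path may vary in your cluster):

    hadoop fs -ls /hive/warehouse/table1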

Run Sqoop jobs remotely using HDInsight SDK PowerShell cmdlets

To use the HDInsight PowerShell tools you need to install the Windows Azure PowerShell tools first and then install the HDInsight PowerShell tools. You then need to prepare your workstation to use the HDInsight SDK. Please follow the detailed steps in this earlier blog post to install the tools and prepare your workstation to use the HDInsight SDK.

Once you have installed and configured the Windows Azure PowerShell tools and the HDInsight SDK, running a Sqoop job is very easy. Please follow the steps below to import all the rows of table "Table2" from the Windows Azure SQL Database "mfarooqSQLDB" to the HDInsight cluster.

  1. Open the Windows Azure PowerShell console on the workstation and run the following cmdlets one at a time.

    Note: You can also use Windows PowerShell ISE to type the code and run it all at once. PowerShell ISE makes editing easy, and you can open the tool from "C:\Windows\System32\WindowsPowerShell\v1.0\powershell_ise.exe".

  2. Set the variables for your Windows Azure Subscription name and the HDInsight cluster name.

    $subscriptionName = "<WindowsAzureSubscriptionName>"
    $clusterName = "<HDInsightClusterName>"
    Select-AzureSubscription $subscriptionName
    Use-AzureHDInsightCluster $clusterName -Subscription $subscriptionName

  3. Define the Sqoop job that we want to run. In this exercise we will import all the rows of the table "Table2" that we created earlier in the Windows Azure SQL Database.

    $sqoop = New-AzureHDInsightSqoopJobDefinition -Command "import --connect jdbc:sqlserver://<SQLDatabaseServerName>.database.windows.net:1433;username=<SQLDatabaseUsername>@<SQLDatabaseServerName>;password=<SQLDatabasePassword>;database=<SQLDatabaseDatabaseName> --table Table2 --target-dir /user/hdp/SqoopImportTable8"

  4. Run the Sqoop job that we just defined.

    $sqoopJob = Start-AzureHDInsightJob -Subscription $subscriptionName -Cluster $clusterName -JobDefinition $sqoop

  5. Run the following to wait for the completion or failure of the HDInsight job and show its progress.

    Wait-AzureHDInsightJob -Subscription $subscriptionName -WaitTimeoutInSeconds 3600 -Job $sqoopJob

  6. Run the following to retrieve the log output for a job from the storage account associated with a specified cluster.

    Get-AzureHDInsightJobOutput -Cluster $clusterName -Subscription $subscriptionName -StandardError -JobId $sqoopJob.JobId

If the Sqoop job completes successfully, you should see output similar to the following in your Windows Azure PowerShell command-line window.

Troubleshooting tips

When you run a Sqoop command, it runs a MapReduce job in the Hadoop cluster (map-only, with no reduce tasks). You can specify the number of map tasks; by default four are used. There is no separate log file specific to Sqoop, so we need to troubleshoot Sqoop job failures or performance issues the same way as any other MapReduce job failure or performance issue, starting by checking the task logs. I plan to write more on how to troubleshoot Sqoop issues, focusing on some specific scenarios, in the near future.

That's all for today, and I hope you found this blog useful. I look forward to your comments and suggestions.
