Copy table data into a file with a Hive SQL query

In this blog post, we will learn how one can copy table data into a file using a Hive SQL query. This is a very common task in the Hadoop ecosystem. Such queries are often executed from a Java program. One example of the SQL query is shown below:

SELECT * FROM table WHERE id > 100

Now the question arises: how can one export these results to an HDFS file? Let's look at some of the solutions available.

Solutions

The first solution is to run the query below, which writes the result set directly into a directory on HDFS:

INSERT OVERWRITE DIRECTORY '/path/to/output/dir' SELECT * FROM table WHERE id > 100;
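
By default, INSERT OVERWRITE DIRECTORY writes its files using Hive's default field delimiter (the non-printable \001 / Ctrl-A character), which can be awkward to work with afterwards. On Hive 0.11 and later you should also be able to add a ROW FORMAT clause to pick the delimiter yourself; a hedged sketch, where the path and table name are placeholders:

```sql
-- Assumes Hive 0.11+; '/path/to/output/dir' and the table name are placeholders.
-- Writes the result set to HDFS as comma-separated text instead of \001-delimited.
INSERT OVERWRITE DIRECTORY '/path/to/output/dir'
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
SELECT * FROM table WHERE id > 100;
```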

Another option is to run the query from the shell and redirect the output to a local text file:

$ hive -e "SELECT * FROM table WHERE id > 100" > ~/sample_output.txt

A third command is useful when you want the results written as tab-delimited text files to a local directory:

INSERT OVERWRITE LOCAL DIRECTORY '/home/hadoop/YourTableDir'
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\t'
STORED AS TEXTFILE
SELECT * FROM table WHERE id > 100;

As a last solution, one can follow these steps to accomplish the same task:

  • Create a separate external table whose location points at the desired output directory.
  • Insert the query results into that table.
  • Optionally drop the table later; since it is an external table, dropping it will not delete the underlying files.
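
The steps above can be sketched in HiveQL as follows; the table names, columns, and directory path are illustrative placeholders, not from the original post:

```sql
-- 1. Create an external table whose LOCATION is the desired output directory
--    (export_results, source_table, and the path are placeholder names).
CREATE EXTERNAL TABLE export_results (
  id INT,
  name STRING
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\t'
STORED AS TEXTFILE
LOCATION '/path/to/output/dir';

-- 2. Insert the query results into that table; Hive writes the data files
--    into the LOCATION directory.
INSERT OVERWRITE TABLE export_results
SELECT id, name FROM source_table WHERE id > 100;

-- 3. Optionally drop the table. Because it is EXTERNAL, the files under
--    /path/to/output/dir are left intact.
DROP TABLE export_results;
```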