How to install Shiny Server with R.


Shiny Server enables users to host and manage Shiny applications on the Internet. Shiny is an R package that uses a reactive programming model to simplify the development of R-powered web applications.
Before installing Shiny Server, first check whether R is installed on your system:

$ R


If you get the R prompt, R is already installed on your system.


If you get an error instead, install R first:

$ sudo apt-get install r-base

Next, install the Shiny R package, which Shiny Server requires:

$ sudo su - \
-c "R -e \"install.packages('shiny', repos='https://cran.rstudio.com/')\""


Once you’ve installed R and the Shiny package, execute the following commands in a terminal window to install gdebi (which is used to install Shiny Server and all of its dependencies) and Shiny Server.

$ sudo apt-get install gdebi-core


$ wget https://download3.rstudio.org/ubuntu-12.04/x86_64/shiny-server-1.5.3.838-amd64.deb


$ sudo gdebi shiny-server-1.5.3.838-amd64.deb


Once installed, you need to learn how to manage and configure Shiny Server.
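As a starting point, here is a minimal sketch of checking and managing the service, assuming a systemd-based Ubuntu and the default install paths (the configuration file and application directory shown are the standard Shiny Server defaults, not anything specific to this setup):

$ sudo systemctl status shiny-server          # confirm the service is running
$ sudo systemctl restart shiny-server         # restart after changing the configuration
$ cat /etc/shiny-server/shiny-server.conf     # default configuration file

By default, Shiny Server listens on port 3838 and serves applications placed under /srv/shiny-server.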

 


How to install & Configure Apache Thrift in Ubuntu 16.04


The Apache Thrift software framework, for scalable cross-language services development, combines a software stack with a code generation engine to build services that work efficiently and seamlessly between C++, Java, Python, PHP, Ruby, Erlang, Perl, Haskell, C#, Cocoa, JavaScript, Node.js, Smalltalk, OCaml and Delphi and other languages.

Step-by-step installation guide:

  1. To use Thrift with Java you will also need to install Apache Ant:
    >> sudo apt-get install ant
  2. Installing required tools and libraries:
    >> sudo apt-get install libboost-dev libboost-test-dev libboost-program-options-dev libboost-filesystem-dev libboost-thread-dev libevent-dev automake libtool flex bison pkg-config g++ libssl-dev
  3. Download Thrift from Git:
    >> git clone https://git-wip-us.apache.org/repos/asf/thrift.git thrift
    >> cd thrift
  4. For an Ubuntu Linux distribution, go to the thrift directory and run:
    >> ./bootstrap.sh
    >> ./configure
  5. In the same directory, run make to build Thrift:
    >> sudo make
  6. (Optional) Run the test suite if you want
    >> sudo make check
  7. And finally you are ready to install Thrift by running
    >> sudo make install
  • Verify installation:
    Your Thrift installation is now complete. To verify that Thrift installed successfully, type:
    >> thrift --version
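As an additional sanity check, you can try generating code from a Thrift IDL file (a minimal sketch; example.thrift is just a placeholder file name, not part of the original guide):

    >> thrift --gen java example.thrift    # generates Java stubs under gen-java/
    >> thrift --gen py example.thrift      # generates Python stubs under gen-py/

If both commands run without errors, the compiler and its code generators are working.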


Apache Sqoop Important Commands


1. Checking the version of your Sqoop

sqoop version

2. Getting the help from Sqoop about its Sub-Utilities

sqoop help

3. Simple import of data

sqoop import --connect jdbc:mysql://localhost/dbName --table tableName --username root -P -m 1

4. Specifying the target directory as /user/tardir

sqoop import --connect jdbc:mysql://localhost/dbName --table tableName --username root --target-dir /user/tardir -P -m 1

5. Specifying the warehouse directory as /ware/

sqoop import --connect jdbc:mysql://localhost/dbName --table tableName --username root --warehouse-dir /ware/ -P -m 1

6. Using the same warehouse dir (/ware) for the next import

sqoop import --connect jdbc:mysql://localhost/dbName --table tableName --username root --warehouse-dir /ware/ -P -m 1

7. Importing only a subset of data (rows where marks = 70)

sqoop import --connect jdbc:mysql://localhost/dbName --table tableName --where "marks = '70'" --username root --target-dir /subsetdata -P -m 1

8. Importing only a subset of data (rows where marks > 70)

sqoop import --connect jdbc:mysql://localhost/dbName --table tableName --where "marks > '70'" --username root --target-dir /subsetdata -P -m 1

9. Importing a table without specifying the number of map tasks

sqoop import --connect jdbc:mysql://localhost/dbName --table tableName --username root -P --warehouse-dir /ware

10. Importing all your tables

sqoop import-all-tables --connect jdbc:mysql://localhost/dbName --username root --target-dir /alltables -P -m 1

11. Importing all the tables from a database

sqoop import-all-tables --connect jdbc:mysql://localhost/dbName --username root -P --warehouse-dir /dbName

12. Importing all but a few tables from a database

sqoop import-all-tables --connect jdbc:mysql://localhost/dbName --username root -P --exclude-tables tableName --warehouse-dir /dbName1

13. Listing all the databases

sqoop list-databases --connect jdbc:mysql://localhost --username root -P

14. Listing all the tables of a database

sqoop list-tables --connect jdbc:mysql://localhost/dbName --username root -P

15. Incremental import

sqoop import --connect jdbc:mysql://localhost/dbName --username root -P --table tableName --incremental append --check-column columnName --last-value 1 --target-dir /increment2

16. Importing table data with a date reference

sqoop import --connect jdbc:mysql://localhost/dbName --username root -P --table tableName --incremental lastmodified --check-column last_update_date --last-value "2015-05-04 01:05:14" --target-dir /date2incre

Note: make sure the date column in the table uses the correct date/timestamp format.

Importing table data with a JOIN

sqoop import --connect jdbc:mysql://localhost/dbName --username root -P --query 'SELECT tableName.P_Id, tableName1.city FROM tableName JOIN tableName1 USING (P_Id) WHERE $CONDITIONS' --split-by P_Id --target-dir /join

17. Supplying the password on the command line

sqoop import --connect jdbc:mysql://localhost/dbName --table tableName --username root --password root

18. Using a password file in a Sqoop import

sqoop import --connect jdbc:mysql://localhost/dbName --table tableName --username root --password-file root --target-dir /passfile
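One way to create such a password file (a hedged sketch; the HDFS path /user/hive_user/.password and the password value are placeholders, and --password-file generally expects the path of a file containing only the password):

$ echo -n "MySecretPassword" > .password
$ hdfs dfs -put .password /user/hive_user/.password
$ hdfs dfs -chmod 400 /user/hive_user/.password

The file is then referenced by that path in the --password-file option.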

19. Sqoop import in SequenceFile format

sqoop import --connect jdbc:mysql://localhost/dbName --table tableName2 --username root -P --as-sequencefile --target-dir /targetDir

Sqoop import in Avro data file format

sqoop import --connect jdbc:mysql://localhost/dbName --table tableName2 --username root -P --as-avrodatafile --target-dir /avro

Sqoop command to list data from a table

sqoop eval --connect jdbc:mysql://localhost/dbName --username test --query "select * from emp" -P;

20. Sqoop command to insert data into the MySQL emp table

sqoop eval --connect jdbc:mysql://localhost/dbName --username test --query "insert into emp values(4,'ZMO')" -P;

21. Sqoop command to truncate a table

sqoop eval --connect jdbc:mysql://localhost/test1 --username test --query "truncate table mytable_test" -P;

22. Sqoop command to update data in MySQL

sqoop eval --connect jdbc:mysql://localhost/dbName --username test --query "update emp set id=104 where id=4" -P;

23. Sqoop command to delete a record from MySQL

sqoop eval --connect jdbc:mysql://localhost/test1 --username test --query "delete from mytable2 where id=102" -P;

24. Sqoop command to import column-specific data

sqoop import --connect jdbc:mysql://localhost/test --table EMPLOYEES --columns "employee_id,first_name,last_name,job_title"

25. Sqoop command to validate a connection

sqoop import --connect jdbc:mysql://localhost/test1 --table mytable2 --validate --username test -P;

26. Sqoop command to export data from Hive to MySQL (the target table must already exist in the database)

sqoop export --connect jdbc:mysql://localhost/dbName --table tableName --export-dir /user/dir/ --username test -P;

27. Sqoop command to import data into Hive

sqoop import --connect jdbc:mysql://localhost/test1 --table mytable2 --hive-import --target-dir /home/user --username test -P;

28. Sqoop command to create a table in Hive using the MySQL schema

sqoop create-hive-table --connect jdbc:mysql://localhost/test1 --table mytable3 --hive-table tab3 --username test -P;

29. Sqoop command for fast MySQL imports using --direct

sqoop import --connect jdbc:mysql://localhost/sqoop_test --table stocks --direct

30. Sqoop command to merge (incremental import with a merge key)

sqoop import --connect jdbc:mysql://localhost/test --table emp --username hive --password hive --incremental lastmodified --merge-key empid --check-column cr_date --target-dir /sqoop/empdata/

31. Overwriting data in Hive using Sqoop

sqoop import --connect jdbc:mysql://localhost/test --username root --password 'hr' --table sample --hive-import --hive-overwrite --hive-table sqoophive -m 1 --fields-terminated-by '\t' --lines-terminated-by '\n'

sqoop import --connect jdbc:mysql://localhost/test --username root --password 'hr' --table sample --hive-overwrite --hive-table sqoophive -m 1 --fields-terminated-by '\t' --lines-terminated-by '\n'

32. Sqoop import with compression

sqoop import --connect jdbc:mysql://localhost/test1 --username test --password test --table TableA -m 5 --split-by ID --hive-table output.TableACompress --hive-import --compress --compression-codec org.apache.hadoop.io.compress.SnappyCodec --schema schemaname --as-sequencefile

33. The Sqoop export command works in two modes:

1. Insert: insert mode inserts new records from HDFS into the RDBMS table.

2. Update: update mode updates existing records in the RDBMS from HDFS data.

Insert export:

sqoop export --connect jdbc:mysql://localhost/sqoop_export --table tableNamedent_exported --export-dir /sqoop/newtableNamedent/part-m-00000

34. Update export:

sqoop export --connect jdbc:mysql://localhost/sqoop_export --export-dir /sqoop/emp_last/part-m-00000 --update-key id

Update mode (updateonly)

sqoop export --connect jdbc:mysql://localhost/dbName --table tableName --username test --direct --update-key id --update-mode updateonly --export-dir /home/shivank/static/dept=IT -P;

35. Update mode (upsert mode): insert if the record does not exist, update if it does.

sqoop export --connect jdbc:mysql://localhost/employees --username test --table employees_export --update-key emp_no --update-mode allowinsert --export-dir /user/airawat/sqoop-mysql/Employees

36. Exporting a non-partitioned Hive table into MySQL

Source: hive-table departments_mysql

Destination: mysql-table departments_export_hive

sqoop export --connect jdbc:mysql://airawat-mysqlserver-node/employees --username myUID --table departments_export_hive --direct --enclosed-by '\"' --export-dir /user/hive/warehouse/departments_mysql

37. Exporting a partitioned Hive table into MySQL

sqoop export --connect jdbc:mysql://localhost/employees --username myUID --table employees_export_hive --direct --enclosed-by '\"' --export-dir /user/hive/warehouse/employees_import_parts/gender=M

sqoop import --connect jdbc:mysql://localhost/employees --username myUID --query 'select emp_no,birth_date,first_name,last_name,hire_date from employees where gender="M" AND $CONDITIONS' --direct --split-by emp_no --hive-import --create-hive-table --hive-table employees_import_parts --hive-partition-key gender --hive-partition-value 'M' --optionally-enclosed-by '\"' --target-dir /user/hive/warehouse/employee-parts

38. Sqoop insert into a partitioned table

sqoop import --connect jdbc:oracle:thin:@//jfadboc1.jfa.unibet.com:1521/xxx --username xxx --password xxx --table DW_FACT_PUNTER_TEMP --split-by TIME_KEY --target-dir unica/data/DW_FACT_PUNTER_TEST --hive-import --hive-overwrite --hive-drop-import-delims --null-non-string '\\N' --null-string '\\N' --hive-table unica.DW_FACT_PUNTER_TEST_TEMP --hive-partition-key "TIME_YEAR","TIME_MONTH","TIME_DAY"

39. Creating a saved job

sqoop job --create myjob -- import --connect jdbc:mysql://example.com/db --table mytable

40. Listing saved jobs

sqoop job --list

41. Executing a saved job (see the additional job-management commands sketched after this list)

sqoop job --exec myjob -- --username someuser -P

42. Codegen

sqoop codegen --connect jdbc:mysql://localhost/userdb --username root --table emp -P;
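For the saved jobs covered in items 39-41, two more commands can be handy; this is a hedged sketch based on the standard sqoop job tool options, reusing the same example job name myjob from above:

sqoop job --show myjob      # display the saved job's definition and parameters
sqoop job --delete myjob    # remove the saved job from the metastore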


Apache Drill Installation and Storage Plugin Configuration for HBase, Hive, CP and DFS for Beginners


Introduction

Drill is a very useful query engine that can serve multiple purposes. The official definition reads: "Apache Drill is a low latency distributed query engine for large-scale datasets, including structured and semi-structured/nested data. Drill is designed to scale to several thousands of nodes and query petabytes of data at interactive speeds that BI/Analytics environments require."

Drill is also beneficial for short, interactive ad-hoc queries on large-scale data sets. Drill is capable of querying nested data in formats like JSON and Parquet and performing dynamic schema discovery. Drill does not require a centralized metadata repository in Hadoop.
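As a quick way to try an ad-hoc query, you can start Drill in embedded mode and query the sample data that ships with Drill through the classpath (cp) storage plugin; this is a minimal sketch assuming a local, single-node install:

$ bin/drill-embedded
0: jdbc:drill:zk=local> SELECT * FROM cp.`employee.json` LIMIT 5;

The employee.json file is bundled in Drill's sample data, so this query works without configuring any external storage.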


Connecting to Drill via JDBC fails with IllegalStateException (java.lang.IllegalStateException: No DrillbitEndpoint can be found)


This is my own code, which I am using to connect to Drill via JDBC and query a *.csv file on my Cloudera sandbox.

Code:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

try {
    // Load the Drill JDBC driver
    Class.forName("org.apache.drill.jdbc.Driver");
    // Connect through ZooKeeper running on localhost:2181
    Connection connection = DriverManager.getConnection("jdbc:drill:zk=localhost:2181", "root", "cloudera");
    Statement st = connection.createStatement();
    ResultSet rs = st.executeQuery("SELECT * FROM dfs.`/home/cloudera/Desktop/test.csv`");
    while (rs.next()) {
        System.out.println(rs.getString(1));
        // ... read the remaining columns as needed
    }
} catch (Exception e) {
    // Handle errors
    e.printStackTrace();
}
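To compile and run this as a standalone program, the Drill JDBC driver has to be on the classpath; a hedged sketch is below, where DrillCsvQuery and the jar path are placeholders (the drill-jdbc-all jar ships with the Drill distribution, typically under jars/jdbc-driver/):

$ javac -cp /path/to/drill-jdbc-all-<version>.jar DrillCsvQuery.java
$ java -cp /path/to/drill-jdbc-all-<version>.jar:. DrillCsvQuery

If the "No DrillbitEndpoint can be found" error persists, it usually means no Drillbit is registered in the ZooKeeper quorum named in the connection string, so checking that a Drillbit is actually running and registered with localhost:2181 is a reasonable first step.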
