Date: 2017-12-06    Source: Spark Notes - Reading Hive Data from the Spark-Shell Client


1. Copy hive-site.xml into spark/conf, and copy mysql-connector-java-xxx-bin.jar into hive/lib (a sketch of the relevant hive-site.xml property follows this list).

2. Start the Hive metastore service: hive --service metastore

3. Start the Hadoop daemons: sh $HADOOP_HOME/sbin/start-all.sh

4. Start the Spark daemons: sh $SPARK_HOME/sbin/start-all.sh

5. Launch the shell: spark-shell
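Step 1 matters because spark-shell reads the metastore address from the copied hive-site.xml. A minimal sketch of the one property involved here; the thrift address below is the one that appears later in this session's log, so substitute your own host and port:

<configuration>
  <!-- where the HiveContext in spark-shell finds the Hive metastore started in step 2 -->
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://192.168.66.66:9083</value>
  </property>
</configuration>

In this setup the mysql-connector jar is needed by the metastore service itself, which keeps its data in MySQL; Spark only talks to the metastore over thrift.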

6. Work with Hive from Scala (Spark SQL):

scala> val conf = new SparkConf().setAppName("SparkHive").setMaster("local")   // optional: spark-shell has already created a context
scala> val sc = new SparkContext(conf)   // optional: sc is already available in the shell
scala> val sqlContext = new org.apache.spark.sql.hive.HiveContext(sc)
scala> sqlContext.sql("CREATE TABLE IF NOT EXISTS src (key INT, value STRING) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t' ")   // note: the field delimiter must match the data
scala> sqlContext.sql("LOAD DATA INPATH '/user/spark/src.txt' INTO TABLE src")
scala> sqlContext.sql("SELECT * FROM src").collect().foreach(println)
scala> sc.stop()
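The delimiter comment on the CREATE TABLE above is worth spelling out: the character named in FIELDS TERMINATED BY must be the character that actually separates the columns in the data file, or the rows will not split into key/value as intended. A hypothetical /user/spark/src.txt (the HDFS path named in the LOAD DATA line), with a single tab between the two fields:

1	hello
2	world
3	spark

With such a file loaded, the SELECT would print one Row per line, e.g. [1,hello]. In the session log below the table is created fresh and nothing is loaded, so the same SELECT prints nothing and select count(*) prints [0].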

Full session log:

SQL context available as sqlContext.

scala> val sqlContext = new org.apache.spark.sql.hive.HiveContext(sc)
17/12/05 10:38:51 INFO HiveContext: Initializing execution hive, version 1.2.1
17/12/05 10:38:51 INFO ClientWrapper: Inspected Hadoop version: 2.4.0
17/12/05 10:38:51 INFO ClientWrapper: Loaded org.apache.hadoop.hive.shims.Hadoop23Shims for Hadoop version 2.4.0
17/12/05 10:38:51 WARN HiveConf: HiveConf of name hive.metastore.local does not exist
17/12/05 10:38:51 WARN HiveConf: HiveConf of name hive.server2.webui.port does not exist
17/12/05 10:38:51 WARN HiveConf: HiveConf of name hive.server2.webui.host does not exist
17/12/05 10:38:51 WARN HiveConf: HiveConf of name hive.enable.spark.execution.engine does not exist
17/12/05 10:38:51 INFO metastore: Mestastore configuration hive.metastore.warehouse.dir changed from file:/tmp/spark-ecfcdcc1-2bb0-4efc-aa00-96ad1dd47840/metastore to file:/tmp/spark-ea48b58b-ef90-43c0-8d5e-f54a4b4cadde/metastore
17/12/05 10:38:51 INFO metastore: Mestastore configuration javax.jdo.option.ConnectionURL changed from jdbc:derby:;databaseName=/tmp/spark-ecfcdcc1-2bb0-4efc-aa00-96ad1dd47840/metastore;create=true to jdbc:derby:;databaseName=/tmp/spark-ea48b58b-ef90-43c0-8d5e-f54a4b4cadde/metastore;create=true
17/12/05 10:38:51 INFO HiveMetaStore: 0: Shutting down the object store...
17/12/05 10:38:51 INFO audit: ugi=root ip=unknown-ip-addr cmd=Shutting down the object store...
17/12/05 10:38:51 INFO HiveMetaStore: 0: Metastore shutdown complete.
17/12/05 10:38:51 INFO audit: ugi=root ip=unknown-ip-addr cmd=Metastore shutdown complete.
17/12/05 10:38:51 INFO HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
17/12/05 10:38:51 INFO ObjectStore: ObjectStore, initialize called
17/12/05 10:38:51 INFO Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
17/12/05 10:38:51 INFO Persistence: Property datanucleus.cache.level2 unknown - will be ignored
17/12/05 10:38:56 WARN HiveConf: HiveConf of name hive.metastore.local does not exist
17/12/05 10:38:56 WARN HiveConf: HiveConf of name hive.server2.webui.port does not exist
17/12/05 10:38:56 WARN HiveConf: HiveConf of name hive.server2.webui.host does not exist
17/12/05 10:38:56 WARN HiveConf: HiveConf of name hive.enable.spark.execution.engine does not exist
17/12/05 10:38:56 INFO ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
17/12/05 10:38:57 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
17/12/05 10:38:57 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
17/12/05 10:39:01 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
17/12/05 10:39:01 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
17/12/05 10:39:01 INFO MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY
17/12/05 10:39:01 INFO ObjectStore: Initialized ObjectStore
17/12/05 10:39:01 WARN ObjectStore: Failed to get database default, returning NoSuchObjectException
17/12/05 10:39:02 INFO HiveMetaStore: Added admin role in metastore
17/12/05 10:39:02 INFO HiveMetaStore: Added public role in metastore
17/12/05 10:39:02 INFO HiveMetaStore: No user is added in admin role, since config is empty
17/12/05 10:39:02 INFO SessionState: Created local directory: /tmp/d66a519b-e512-4295-b707-0f688aa238ea_resources
17/12/05 10:39:02 INFO SessionState: Created HDFS directory: /user/hive/tmp/root/d66a519b-e512-4295-b707-0f688aa238ea
17/12/05 10:39:02 INFO SessionState: Created local directory: /tmp/root/d66a519b-e512-4295-b707-0f688aa238ea
17/12/05 10:39:02 INFO SessionState: Created HDFS directory: /user/hive/tmp/root/d66a519b-e512-4295-b707-0f688aa238ea/_tmp_space.db
...
17/12/05 10:39:02 INFO HiveContext: default warehouse location is /user/hive/warehouse
17/12/05 10:39:02 INFO HiveContext: Initializing HiveMetastoreConnection version 1.2.1 using Spark classes.
17/12/05 10:39:02 INFO ClientWrapper: Inspected Hadoop version: 2.4.0
17/12/05 10:39:03 INFO ClientWrapper: Loaded org.apache.hadoop.hive.shims.Hadoop23Shims for Hadoop version 2.4.0
17/12/05 10:39:08 INFO metastore: Trying to connect to metastore with URI thrift://192.168.66.66:9083
17/12/05 10:39:08 INFO metastore: Connected to metastore.
17/12/05 10:39:10 INFO SessionState: Created local directory: /tmp/4989df94-ba31-4ef6-ab78-369043e2067e_resources
17/12/05 10:39:10 INFO SessionState: Created HDFS directory: /user/hive/tmp/root/4989df94-ba31-4ef6-ab78-369043e2067e
17/12/05 10:39:10 INFO SessionState: Created local directory: /tmp/root/4989df94-ba31-4ef6-ab78-369043e2067e
17/12/05 10:39:10 INFO SessionState: Created HDFS directory: /user/hive/tmp/root/4989df94-ba31-4ef6-ab78-369043e2067e/_tmp_space.db
sqlContext: org.apache.spark.sql.hive.HiveContext = org.apache.spark.sql.hive.HiveContext@3be94b12

scala> sqlContext.sql("use siat")
17/12/05 10:39:36 INFO ParseDriver: Parsing command: use siat
17/12/05 10:39:41 INFO ParseDriver: Parse Completed
...
17/12/05 10:39:51 INFO Driver: Semantic Analysis Completed
17/12/05 10:39:51 INFO Driver: Concurrency mode is disabled, not creating a lock manager
17/12/05 10:39:52 INFO Driver: Starting task [Stage-0:DDL] in serial mode
17/12/05 10:39:52 INFO Driver: OK
res0: org.apache.spark.sql.DataFrame = [result: string]

scala> sqlContext.sql("drop table src")
17/12/05 10:40:13 INFO ParseDriver: Parsing command: drop table src
17/12/05 10:40:13 INFO ParseDriver: Parse Completed
...
17/12/05 10:40:19 INFO Driver: Semantic Analysis Completed
17/12/05 10:40:19 INFO Hive: Total time spent in this metastore function was greater than 1000ms : getTable_(String, String, )=3999
17/12/05 10:40:19 INFO Driver: Starting task [Stage-0:DDL] in serial mode
17/12/05 10:41:04 INFO Hive: Total time spent in this metastore function was greater than 1000ms : dropTable_(String, String, boolean, boolean, boolean, )=44266
17/12/05 10:41:04 INFO Driver: OK
res1: org.apache.spark.sql.DataFrame = []

scala> sqlContext.sql("CREATE TABLE IF NOT EXISTS src (key INT, value STRING) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t' ")
17/12/05 10:41:57 INFO ParseDriver: Parsing command: CREATE TABLE IF NOT EXISTS src (key INT, value STRING) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
17/12/05 10:41:57 INFO ParseDriver: Parse Completed
17/12/05 10:41:58 INFO CalcitePlanner: Starting Semantic Analysis
17/12/05 10:41:58 INFO CalcitePlanner: Creating table siat.src position=27
17/12/05 10:41:58 INFO Driver: Semantic Analysis Completed
17/12/05 10:41:58 INFO Driver: Starting task [Stage-0:DDL] in serial mode
17/12/05 10:42:01 INFO Hive: Total time spent in this metastore function was greater than 1000ms : createTable_(Table, )=2431
17/12/05 10:42:01 INFO Driver: OK
res2: org.apache.spark.sql.DataFrame = [result: string]

scala> sqlContext.sql("select * from src").collect().foreach(println)
17/12/05 10:42:54 INFO ParseDriver: Parsing command: select * from src
17/12/05 10:42:54 INFO ParseDriver: Parse Completed
17/12/05 10:42:56 INFO deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
17/12/05 10:42:58 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 467.6 KB, free 142.8 MB)
17/12/05 10:43:02 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 40.5 KB, free 142.8 MB)
17/12/05 10:43:02 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.66.66:36024 (size: 40.5 KB, free: 143.2 MB)
17/12/05 10:43:02 INFO SparkContext: Created broadcast 0 from collect at <console>:30
17/12/05 10:43:04 INFO FileInputFormat: Total input paths to process : 0
17/12/05 10:43:04 INFO SparkContext: Starting job: collect at <console>:30
17/12/05 10:43:04 INFO DAGScheduler: Job 0 finished: collect at <console>:30, took 0.043396 s

scala> val res=sqlContext.sql("select * from src").collect().foreach(println)
17/12/05 10:43:25 INFO ParseDriver: Parsing command: select * from src
17/12/05 10:43:25 INFO ParseDriver: Parse Completed
17/12/05 10:43:26 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 467.6 KB, free 142.3 MB)
17/12/05 10:43:27 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 40.5 KB, free 142.3 MB)
17/12/05 10:43:27 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 192.168.66.66:36024 (size: 40.5 KB, free: 143.2 MB)
17/12/05 10:43:27 INFO SparkContext: Created broadcast 1 from collect at <console>:29
17/12/05 10:43:27 INFO FileInputFormat: Total input paths to process : 0
17/12/05 10:43:27 INFO SparkContext: Starting job: collect at <console>:29
17/12/05 10:43:27 INFO DAGScheduler: Job 1 finished: collect at <console>:29, took 0.000062 s

scala> res

scala> val res=sqlContext.sql("select count(*) from src").collect().foreach(println)
17/12/05 10:43:47 INFO ParseDriver: Parsing command: select count(*) from src
17/12/05 10:43:47 INFO ParseDriver: Parse Completed
17/12/05 10:43:48 INFO MemoryStore: Block broadcast_2 stored as values in memory (estimated size 467.0 KB, free 141.8 MB)
17/12/05 10:43:48 INFO MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 40.4 KB, free 141.8 MB)
17/12/05 10:43:48 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on 192.168.66.66:36024 (size: 40.4 KB, free: 143.1 MB)
17/12/05 10:43:48 INFO SparkContext: Created broadcast 2 from collect at <console>:29
17/12/05 10:43:49 INFO FileInputFormat: Total input paths to process : 0
17/12/05 10:43:49 INFO BlockManagerInfo: Removed broadcast_0_piece0 on 192.168.66.66:36024 in memory (size: 40.5 KB, free: 143.2 MB)
17/12/05 10:43:49 INFO SparkContext: Starting job: collect at <console>:29
17/12/05 10:43:49 INFO BlockManagerInfo: Removed broadcast_1_piece0 on 192.168.66.66:36024 in memory (size: 40.5 KB, free: 143.2 MB)
17/12/05 10:43:49 INFO DAGScheduler: Registering RDD 15 (collect at <console>:29)
17/12/05 10:43:49 INFO DAGScheduler: Got job 2 (collect at <console>:29) with 1 output partitions
17/12/05 10:43:49 INFO DAGScheduler: Final stage: ResultStage 1 (collect at <console>:29)
17/12/05 10:43:49 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 0)
17/12/05 10:43:49 INFO DAGScheduler: Missing parents: List()
17/12/05 10:43:49 INFO DAGScheduler: Submitting ResultStage 1 (MapPartitionsRDD[18] at collect at <console>:29), which has no missing parents
17/12/05 10:43:49 INFO MemoryStore: Block broadcast_3 stored as values in memory (estimated size 12.0 KB, free 142.7 MB)
17/12/05 10:43:49 INFO MemoryStore: Block broadcast_3_piece0 stored as bytes in memory (estimated size 6.0 KB, free 142.7 MB)
17/12/05 10:43:49 INFO BlockManagerInfo: Added broadcast_3_piece0 in memory on 192.168.66.66:36024 (size: 6.0 KB, free: 143.2 MB)
17/12/05 10:43:49 INFO SparkContext: Created broadcast 3 from broadcast at DAGScheduler.scala:1006
17/12/05 10:43:49 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 1 (MapPartitionsRDD[18] at collect at <console>:29)
17/12/05 10:43:49 INFO TaskSchedulerImpl: Adding task set 1.0 with 1 tasks
17/12/05 10:44:05 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
17/12/05 10:44:20 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
...
17/12/05 10:45:57 INFO AppClient$ClientEndpoint: Executor added: app-20171205103712-0001/0 on worker-20171204180628-192.168.66.66-7078 (192.168.66.66:7078) with 2 cores
17/12/05 10:45:57 INFO SparkDeploySchedulerBackend: Granted executor ID app-20171205103712-0001/0 on hostPort 192.168.66.66:7078 with 2 cores, 512.0 MB RAM
17/12/05 10:45:59 INFO AppClient$ClientEndpoint: Executor updated: app-20171205103712-0001/0 is now RUNNING
17/12/05 10:46:46 INFO SparkDeploySchedulerBackend: Registered executor NettyRpcEndpointRef(null) (xinfang:10363) with ID 0
17/12/05 10:46:47 INFO TaskSetManager: Starting task 0.0 in stage 1.0 (TID 0, xinfang, partition 0, PROCESS_LOCAL, 1999 bytes)
17/12/05 10:46:48 INFO BlockManagerMasterEndpoint: Registering block manager xinfang:34620 with 143.3 MB RAM, BlockManagerId(0, xinfang, 34620)
17/12/05 10:46:51 INFO BlockManagerInfo: Added broadcast_3_piece0 in memory on xinfang:34620 (size: 6.0 KB, free: 143.2 MB)
17/12/05 10:47:07 INFO MapOutputTrackerMasterEndpoint: Asked to send map output locations for shuffle 0 to xinfang:10363
17/12/05 10:47:08 INFO MapOutputTrackerMaster: Size of output statuses for shuffle 0 is 82 bytes
17/12/05 10:47:14 INFO TaskSetManager: Finished task 0.0 in stage 1.0 (TID 0) in 27243 ms on xinfang (1/1)
17/12/05 10:47:14 INFO TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool
17/12/05 10:47:14 INFO DAGScheduler: ResultStage 1 (collect at <console>:29) finished in 204.228 s
17/12/05 10:47:14 INFO DAGScheduler: Job 2 finished: collect at <console>:29, took 204.785107 s
[0]

scala> res

scala> sc.stop()
17/12/05 10:48:32 INFO SparkUI: Stopped Spark web UI at http://192.168.66.66:4041
17/12/05 10:48:35 INFO SparkDeploySchedulerBackend: Shutting down all executors
17/12/05 10:48:35 INFO SparkDeploySchedulerBackend: Asking each executor to shut down
17/12/05 10:48:35 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
17/12/05 10:48:36 INFO MemoryStore: MemoryStore cleared
17/12/05 10:48:36 INFO BlockManager: BlockManager stopped
17/12/05 10:48:36 INFO BlockManagerMaster: BlockManagerMaster stopped
17/12/05 10:48:36 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
17/12/05 10:48:36 INFO SparkContext: Successfully stopped SparkContext

scala> 17/12/05 10:48:36 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
17/12/05 10:48:36 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
17/12/05 10:48:38 INFO RemoteActorRefProvider$RemotingTerminator: Remoting shut down.
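The transcript above is from Spark 1.x, where Hive access goes through HiveContext. On Spark 2.x and later, HiveContext is deprecated and the same flow goes through a SparkSession with Hive support enabled; a minimal sketch, assuming the same hive-site.xml is in place:

import org.apache.spark.sql.SparkSession

// Spark 2.x+ equivalent of the HiveContext session above:
// enableHiveSupport() wires the session to the metastore configured in hive-site.xml
val spark = SparkSession.builder()
  .appName("SparkHive")
  .enableHiveSupport()
  .getOrCreate()

spark.sql("SELECT * FROM src").collect().foreach(println)
spark.stop()

In a Spark 2.x spark-shell such a session already exists as the predefined value spark, just as sc and sqlContext are predefined in the 1.x shell used here.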

  

Original: http://www.cnblogs.com/xinfang520/p/7985939.html