This repository has been archived by the owner on Dec 15, 2023. It is now read-only.

java.lang.NoClassDefFoundError: com/twitter/jsr166e/LongAdder #2

Open
satishblr opened this issue Mar 6, 2019 · 0 comments
satishblr commented Mar 6, 2019

When I run the following example on Azure Databricks, I get this error:

org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 22.0 failed 4 times, most recent failure: Lost task 1.3 in stage 22.0 (TID 143, 10.139.64.5, executor 1): java.lang.NoClassDefFoundError: com/twitter/jsr166e/LongAdder


import org.apache.spark.sql.cassandra._

//Spark connector
import com.datastax.spark.connector._
import com.datastax.spark.connector.cql.CassandraConnector

//CosmosDB library for multiple retry
import com.microsoft.azure.cosmosdb.cassandra
import com.twitter.jsr166e.LongAdder

//Connection-related
spark.conf.set("spark.cassandra.connection.host","XXxXXXX.cassandra.cosmosdb.azure.com")
spark.conf.set("spark.cassandra.connection.port","10350")
spark.conf.set("spark.cassandra.connection.ssl.enabled","true")
spark.conf.set("spark.cassandra.auth.username","XXXXXX")
spark.conf.set("spark.cassandra.auth.password","XXXXXXXXXXXXXXXXXXXXXXX")
spark.conf.set("spark.cassandra.connection.factory", "com.microsoft.azure.cosmosdb.cassandra.CosmosDbConnectionFactory")

//Throughput-related. You can adjust the values as needed
spark.conf.set("spark.cassandra.output.batch.size.rows", "1")
spark.conf.set("spark.cassandra.connection.connections_per_executor_max", "10")
spark.conf.set("spark.cassandra.output.concurrent.writes", "1000")
spark.conf.set("spark.cassandra.concurrent.reads", "512")
spark.conf.set("spark.cassandra.output.batch.grouping.buffer.size", "1000")
spark.conf.set("spark.cassandra.connection.keep_alive_ms", "600000000")

//Cassandra connector instance
//val cdbConnector = CassandraConnector(sc)

val collection = Seq(("01-02-12453-01", "010", 1, "250", 1), ("01-02-12453-02", "010", 1, "250", 2)).toDF("sku", "bin", "qty", "trancode", "counter")

//Review schema
collection.printSchema

//Print
collection.show

collection
  .write
  .mode("append")
  .format("org.apache.spark.sql.cassandra")
  .options(Map( "table" -> "sku", "keyspace" -> "test", "output.consistency.level" -> "ALL", "ttl" -> "10000000"))
  .save()

Am I missing something? I manually installed the jar (jsr166e-1.1.0.jar) as a cluster library, but still no luck.
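For context, a common way to get `com.twitter.jsr166e.LongAdder` onto the executors (installing the jar only on the driver is not enough) is to pass the dependency to every node via `spark-submit`. The sketch below is an assumption, not a confirmed fix: the Maven coordinates for the Cosmos DB helper package and its version, and the `my_job.jar` name, are illustrative placeholders to be checked against Maven Central and your own build.

```shell
# Sketch only. Assumptions: the azure-cosmos-cassandra-spark-helper artifact
# (which the Azure docs pair with CosmosDbConnectionFactory) bundles the
# com.twitter.jsr166e classes; the version shown is a placeholder.
# --packages distributes the jars to the driver AND all executors, which is
# what a NoClassDefFoundError inside a task stage usually calls for.
spark-submit \
  --packages com.microsoft.azure.cosmosdb:azure-cosmos-cassandra-spark-helper:1.2.0 \
  --conf spark.cassandra.connection.factory=com.microsoft.azure.cosmosdb.cassandra.CosmosDbConnectionFactory \
  my_job.jar   # placeholder for the application jar
```

On Databricks specifically, the equivalent is installing the package as a cluster-scoped Maven library rather than uploading a single jar by hand, so it is propagated to executors automatically.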
