In the middle of the project, I am getting the below error after invoking a function in my Spark SQL query.
I have written a user-defined function that takes two strings, concatenates them, and then keeps the rightmost 5 characters depending on the total string length (an alternative to SQL Server's right(string, integer)).
from pyspark.sql.types import *

def concatstring(xstring, ystring):
    # concatenate the two inputs and keep the rightmost 5 characters,
    # mimicking SQL Server's RIGHT(string, 5) for 6- or 7-character results
    newvalstring = xstring + ystring
    print(newvalstring)
    if len(newvalstring) == 6:
        return newvalstring[1:6]
    elif len(newvalstring) == 7:
        return newvalstring[2:7]
    else:
        return '99999'

spark.udf.register('rightconcat', lambda x, y: concatstring(x, y), StringType())
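As a quick sanity check, the registered UDF can be called on its own from a one-row SQL statement (a minimal sketch; the literals are made up for illustration):

# hypothetical standalone check of the registered UDF
spark.sql("select rightconcat('00000', '7') as sortid").show()
# '00000' + '7' is 6 characters long, so the function returns characters [1:6], i.e. '00007'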
It works fine on its own. But when I call it on a column in my Spark SQL query, this exception occurs.
The query I wrote is:
spark.sql("select d.BldgID,d.LeaseID,d.SuiteID,coalesce(BLDG.BLDGNAME,('select EmptyDefault from EmptyDefault')) as LeaseBldgName,coalesce(l.OCCPNAME,('select EmptyDefault from EmptyDefault'))as LeaseOccupantName, coalesce(l.DBA, ('select EmptyDefault from EmptyDefault')) as LeaseDBA, coalesce(l.CONTNAME, ('select EmptyDefault from EmptyDefault')) as LeaseContact,coalesce(l.PHONENO1, '')as LeasePhone1,coalesce(l.PHONENO2, '')as LeasePhone2,coalesce(l.NAME, '') as LeaseName,coalesce(l.ADDRESS, '') as LeaseAddress1,coalesce(l.ADDRESS2,'') as LeaseAddress2,coalesce(l.CITY, '')as LeaseCity, coalesce(l.STATE, ('select EmptyDefault from EmptyDefault'))as LeaseState,coalesce(l.ZIPCODE, '')as LeaseZip, coalesce(l.ATTENT, '') as LeaseAttention,coalesce(l.TTYPID, ('select EmptyDefault from EmptyDefault'))as LeaseTenantType,coalesce(TTYP.TTYPNAME, ('select EmptyDefault from EmptyDefault'))as LeaseTenantTypeName,l.OCCPSTAT as LeaseCurrentOccupancyStatus,l.EXECDATE as LeaseExecDate, l.RENTSTRT as LeaseRentStartDate,l.OCCUPNCY as LeaseOccupancyDate,l.BEGINDATE as LeaseBeginDate,l.EXPIR as LeaseExpiryDate,l.VACATE as LeaseVacateDate,coalesce(l.STORECAT, (select EmptyDefault from EmptyDefault)) as LeaseStoreCategory ,rightconcat('00000',cast(coalesce(SCAT.SORTSEQ,99999) as string)) as LeaseStoreCategorySortID from Dim_CMLease_primer d join LEAS l on l.BLDGID=d.BldgID and l.LEASID=d.LeaseID left outer join SUIT on SUIT.BLDGID=l.BLDGID and SUIT.SUITID=l.SUITID left outer join BLDG on BLDG.BLDGID= l.BLDGID left outer join SCAT on SCAT.STORCAT=l.STORECAT left outer join TTYP on TTYP.TTYPID = l.TTYPID").show()
I have uploaded the query and the status that follows it here.
How can I solve this problem? Please guide me.
Best answer
The simplest thing to try is to increase the Spark executor memory:
spark.executor.memory=6g
Make sure you are using all of the available memory. You can check that in the Spark UI.
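For example, if you build the session yourself (rather than using a pre-created one in a notebook), the setting can be applied at startup. This is a minimal sketch; the 6g value and the app name are placeholders:

# hypothetical session setup with a larger executor heap
from pyspark.sql import SparkSession

spark = SparkSession.builder \
    .appName("lease-report") \
    .config("spark.executor.memory", "6g") \
    .getOrCreate()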
Update 1: You can pass extra JVM options to the executors with --conf spark.executor.extraJavaOptions="Option"; for example, you can pass -Xmx1024m as an option.
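On the command line that looks roughly like this (a sketch; your_job.py is a placeholder, and recent Spark versions reject -Xmx inside spark.executor.extraJavaOptions and expect the heap size to come from spark.executor.memory, so a GC-logging flag is shown instead):

spark-submit \
  --conf spark.executor.memory=6g \
  --conf spark.executor.extraJavaOptions="-XX:+PrintGCDetails" \
  your_job.py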
What are your current spark.driver.memory and spark.executor.memory settings? Increasing them should fix the problem.
Bear in mind that, per the Spark documentation, in client mode spark.driver.memory must not be set through SparkConf directly in your application, because the driver JVM has already started at that point; set it through the --driver-memory command line option or in your default properties file instead.
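If you are unsure what the running session is actually using, the values can be read back from it, and driver memory then has to be supplied before the JVM starts. A minimal sketch; the 4g/6g figures are placeholders:

# hypothetical check of the current memory settings
print(spark.conf.get("spark.driver.memory", "not set"))
print(spark.conf.get("spark.executor.memory", "not set"))

# driver memory must be given before the driver JVM starts, e.g.:
# spark-submit --driver-memory 4g --conf spark.executor.memory=6g your_job.py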
Update 2:
Since the GC overhead limit error is a garbage collection problem, I would also recommend reading this excellent answer.
Source: apache-spark - Getting OutofMemoryError - GC overhead limit exceeded in pyspark, a similar question on Stack Overflow: https://stackoverflow.com/questions/40992104/