This article explains how to resolve the Watson Studio error "ImportError: No module named 'pydotplus'".

Problem description

Using: Watson Studio, Python 3.5 with Spark, Python notebook: https://gist.github.com/anonymous/ea77f500b4fd80feb69fadb470fca235

This section of the notebook shows the error:

from sklearn import tree        # needed for tree.export_graphviz; regr and X_train come from earlier cells in the notebook
from IPython.display import Image
import pydotplus                # this is the import that fails on the Spark runtime
dot_data = tree.export_graphviz(regr, out_file=None, feature_names=X_train.columns.values, filled=True)
graph = pydotplus.graph_from_dot_data(dot_data)

Gives the error: ImportError: No module named 'pydotplus'

Solution

Is there another environment that actually has this module installed? Or is there a way to install/add this Python module to the existing runtime?

Recommended answer

Found the answer in the IBM Cloud documentation.

https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/importing-libraries.html

Installing custom libraries and packages on Apache Spark
Last updated: March 1, 2019

When you associate Apache Spark with a notebook in Watson Studio, many preinstalled libraries are included. Before you install a library, check the list of preinstalled libraries. Run the appropriate command from a notebook cell:

Python: !pip list --isolated
R: installed.packages()
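
For the module from the question, a quick way to check whether it is already preinstalled is to filter that list from a Python code cell (a minimal sketch; it assumes grep is available in the notebook's shell, which is typical for the Linux-based Spark runtime):

 # Print only the entries matching pydotplus; no output means it is not preinstalled
 !pip list --isolated | grep -i pydotplus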

If the library that you want is not listed, or you want to use a Scala library in a notebook, use the steps in the following sections to install it. The format for library packages depends on the programming language.

To use a Scala library

Libraries for Scala notebooks are typically packaged as Java™ archive (JAR) files.

To cache a library temporarily

The libraries for a Scala notebook are not installed to the Spark service. Instead they are cached when they are downloaded and are only available for the time that the notebook runs.

To use a single library without dependencies, from a public web server:
    Locate the publicly available URL to the library that you want to install. If you create a custom library, you can post it to any publicly available repository, such as GitHub.

    Download the library you want to use in your notebook by running the following command in a code cell:

     %AddJar URL_to_jar_file

To use a library with dependencies, from a public Maven repository:

    Add and import a library with all its dependencies by running the following command. You need the groupId, artifactId, and version of the dependency. For example:

     %AddDeps org.apache.spark spark-streaming-kafka_2.10 1.1.0 --transitive

To install a library permanently

You can install a library permanently to ~/data/libs/ if you want to make the files available to spark-submit jobs and Scala kernels, or want to access the files through Java bridges from other kernels, for example, to use JDBC drivers from Python or R.

The file path under ~/data/libs/ for an installed library varies depending on the Scala version that the library requires:

Use ~/data/libs/ for libraries that work with any Scala version.
Use ~/data/libs/scala-2.11/ for libraries that require Scala 2.11. The Scala kernel for Spark 2.1 uses Scala 2.11.

To install a library:

Locate the publicly available URL to the library that you want to install.

Download the library you want to install permanently into ~/data/libs/ by running the following command in a Python notebook:

 !(cd ~/data/libs/ ; wget URL_to_jar_file)
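
If the library requires a specific Scala version, the version-specific directory might not exist yet; a minimal sketch that creates it before downloading, reusing the URL_to_jar_file placeholder and the scala-2.11 path from the list above:

 # Create the Scala 2.11 library directory if needed, then download the JAR into it
 !(mkdir -p ~/data/libs/scala-2.11 ; cd ~/data/libs/scala-2.11 ; wget URL_to_jar_file)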

To install a Python library

Use the Python pip package installer command to install Python libraries to your notebook. For example, run the following command in a code cell to install the prettyplotlib library:

 !pip install --user prettyplotlib

The --user flag installs the library for personal usage rather than the global default. The installed packages can be used by all notebooks that use the same Python version in the Spark service.
Use the Python import command to import the library components. For example, run the following command in a code cell:

 import prettyplotlib as ppl

Restart the kernel.
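
Applied to the module from the question, the same steps would look like this in a Python code cell (a minimal sketch, assuming pydotplus installs from PyPI the same way as the prettyplotlib example above):

 # Install pydotplus for the current user on the Spark service
 !pip install --user pydotplus

After restarting the kernel, import pydotplus in the failing cell should then succeed.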

To load an R package

Use the R install.packages() function to install new R packages. For example, run the following command in a code cell to install the ggplot2 package for plotting functions:

 install.packages("ggplot2")

The imported package can be used by all R notebooks running in the Spark service.

Use the R library() function to load the installed package. For example, run the following command in a code cell:

 library("ggplot2")

You can now call plotting functions from the ggplot2 package in your notebook.

That concludes this article on the Watson Studio error "ImportError: No module named 'pydotplus'". We hope the recommended answer helps.
