I'm working on some code that converts an HTML table into a CSV file. I'm having trouble figuring out how to use string splitting to remove the whitespace between the pieces of information printed to the terminal. The best result I've managed so far has the terminal printing large blank gaps between the pieces of information, which makes it hard to navigate. Any help would be appreciated.
import csv
from bs4 import BeautifulSoup
from termcolor import cprint

html = open("recallist.html").read()
soup = BeautifulSoup(html)
table = soup.find_all('div', {'id': 'PrintArea'})
output_rows = []
recals = 'recallist.csv'
cprint('READING TABLES', 'green')

for table_row in table:
    columns = table_row.findAll('td')
    output_row = []
    for column in columns:
        output_row.append(column.text)
    output_rows.append(output_row)

with open('recallist.csv', 'w', newline='') as csvfile:
    writer = csv.writer(csvfile)
    writer.writerows(output_rows)

with open(recals, 'r') as f:
    contents = f.read()
    for item in contents.split("Date,Customer,Phone,Cell Phone,Removal,Notes"):
        for refine in item.split('",,'):
            print(refine)
Sample of the CSV output:
Location,,,Date,Customer,Phone,Cell Phone,Removal,Notes,�,�,�,,04/29/19 | 03:00 PM,[9999] FIRST LAST,999-999-9999***,999-999-9999,,"
",,"
","
$127.92
",,04/29/19 | 03:30 PM,[123456] FIRST LAST,999-999-9999***,999-999-9999,04/13/2020,"
",,"
","
$0.02
",,04/29/19 | 04:00 PM,[123456] FIRST LAST,999-999-9999***,,09/10/2019,"
",,"
","
($212.10)
",,04/29/19 | 04:15 PM,[123456] FIRST LAST,999-999-9999***,,01/09/2020,"
",,"
","
$16.23
",,04/29/19 | 04:30 PM,[123456] FIRST LAST,999-999-9999***,,05/30/2019,"
",,"
","
$0.24
",,04/29/19 | 05:00 PM,[123456] FIRST LAST,999-999-9999***,,07/26/2019,"
",,"
","
($0.30)
",,04/29/19 | 07:00 PM,[123456] FIRST LAST,999-999-9999***,999-999-9999,11/15/2019,"
",,"
","
$0.06
",,04/29/19 | 07:30 PM,[123456] FIRST LAST,999-999-9999***,,12/12/2019,"
",,"
","
The format I'm trying to achieve:
04/29/19 | 03:00 PM,[9999] FIRST LAST,999-999-9999***,999-999-9999,$127.92
04/29/19 | 03:30 PM,[99999] FIRST LAST,999-999-9999***,999-999-9999,$0.02
ETC.
HTML sample, in case it's needed:
<tbody><tr class="alt">
<td colspan="5" align="left" style="background-color:668cd9;">Location</td>
<td colspan="5" align="left" style="background-color:668cd9;"></td>
</tr>
<tr align="left" class="GrayBLOCK">
<td></td>
<td>Date</td>
<td>Customer</td>
<td>Phone</td>
<td>Cell Phone</td>
<td>Removal</td>
<td>Notes</td>
<td> </td>
<td> </td>
<td> </td>
</tr>
<tr class="alt">
<td></td>
<td>04/29/19 | 03:00 PM</td>
<td><a href="../code/c_newClient.cfm?theID=99999" target="_blank">[9999]</a> FIRST LAST</td>
<td>999-999-9999***</td>
<td>999-999-9999</td>
<td></td>
<td>
</td>
<td></td>
<td>
</td>
<td align="right" class="RedMED">
$127.92
</td>
</tr>
<tr>
<td></td>
<td>04/29/19 | 03:30 PM</td>
<td><a href="../code/c_newClient.cfm?theID=99999" target="_blank">[999999]</a> FIRST LAST</td>
<td>999-999-9999***</td>
<td>999-999-9999</td>
<td>04/13/2020</td>
<td>
</td>
<td></td>
<td>
</td>
<td align="right" class="RedMED">
$0.02
</td>
</tr>
<tr class="alt">
<td></td>
<td>04/29/19 | 04:00 PM</td>
<td><a href="../code/c_newClient.cfm?theID=99999" target="_blank">[999999]</a> FIRST LAST</td>
<td>999-999-9999***</td>
<td></td>
<td>09/10/2019</td>
<td>
</td>
<td></td>
<td>
</td>
<td align="right" class="RedMED">
($212.10)
</td>
</tr>
<tr>
<td></td>
<td>04/29/19 | 04:15 PM</td>
<td><a href="../code/c_newClient.cfm?theID=99999" target="_blank">[999999]</a> FIRST LAST</td>
<td>999-999-9999***</td>
<td></td>
<td>01/09/2020</td>
<td>
</td>
<td></td>
<td>
</td>
<td align="right" class="RedMED">
$16.23
</td>
</tr>
<tr class="alt">
<td></td>
<td>04/29/19 | 04:30 PM</td>
<td><a href="../code/c_newClient.cfm?theID=99999" target="_blank">[999999]</a> FIRST LAST</td>
<td>999-999-9999***</td>
<td></td>
<td>05/30/2019</td>
<td>
</td>
<td></td>
<td>
</td>
<td align="right" class="RedMED">
$0.24
</td>
</tr>
<tr>
<td></td>
<td>04/29/19 | 05:00 PM</td>
<td><a href="../code/c_newClient.cfm?theID=99999" target="_blank">[999999]</a> FIRST LAST</td>
<td>999-999-9999***</td>
<td></td>
<td>07/26/2019</td>
<td>
</td>
<td></td>
<td>
</td>
<td align="right" class="RedMED">
($0.30)
</td>
</tr>
<tr class="alt">
<td></td>
<td>04/29/19 | 07:00 PM</td>
<td><a href="../code/c_newClient.cfm?theID=99999" target="_blank">[999999]</a> FIRST LAST</td>
<td>999-999-9999***</td>
<td>999-999-9999</td>
<td>11/15/2019</td>
<td>
</td>
<td></td>
<td>
</td>
<td align="right" class="RedMED">
$0.06
</td>
</tr>
Best answer
Update: I found a problem with my original post, so this is a better version. The empty <td> tags create some extra columns. Version 1 keeps those columns; Version 2 removes them, but it is very specific to the format you've shown, and if the format changes, the slicing will have to be adjusted.
Version 1
import csv
from bs4 import BeautifulSoup

with open("recallist.html") as f:
    soup = BeautifulSoup(f.read(), features="html.parser")

rows = soup.find_all('tr')

with open('recallist.csv', 'w', newline='') as csvfile:
    writer = csv.writer(csvfile)
    for row in rows:
        columns = row.find_all('td')
        writer.writerow([column.get_text(strip=True) for column in columns])
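The key change from your original code is get_text(strip=True) instead of .text: strip=True trims the surrounding whitespace from each text fragment, which is exactly what was producing the big blank gaps in your terminal output. A minimal sketch of the difference, using a hypothetical cell modeled on the amount cells in your HTML:

from bs4 import BeautifulSoup

# hypothetical cell mirroring the whitespace-heavy amount cells in your HTML
cell = BeautifulSoup("<td>\n    $127.92\n</td>", "html.parser").td

print(repr(cell.text))                  # '\n    $127.92\n' -- the stray newlines and spaces
print(repr(cell.get_text(strip=True)))  # '$127.92'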
Version 2
import csv
from bs4 import BeautifulSoup

with open("recallist.html") as f:
    soup = BeautifulSoup(f.read(), features="html.parser")

rows = soup.find_all('tr')

with open('recallist.csv', 'w', newline='') as csvfile:
    writer = csv.writer(csvfile)
    # alt: 'for row in rows[2:]:' to slice off the two header rows
    for row in rows:
        columns = row.find_all('td')
        del columns[0]       # drop the leading empty cell
        del columns[-4:-1]   # drop the Notes cell and the two empty spacer cells before the amount
        writer.writerow([column.get_text(strip=True) for column in columns])
If your actual HTML really does have multiple tables with different columns, you'll need to adjust this accordingly. Hope this helps!
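If you'd rather not hard-code the slicing, one alternative (my own suggestion, not part of the versions above) is to simply drop every cell that is empty after stripping. A minimal sketch, with the caveat that this changes behavior whenever an empty field is meaningful: a row with a missing Cell Phone or Removal date would have its remaining values shifted left, so the columns would no longer line up across rows.

import csv
from bs4 import BeautifulSoup

with open("recallist.html") as f:
    soup = BeautifulSoup(f.read(), features="html.parser")

with open('recallist.csv', 'w', newline='') as csvfile:
    writer = csv.writer(csvfile)
    for row in soup.find_all('tr'):
        # keep only the cells that still contain text after stripping whitespace
        cells = [td.get_text(strip=True) for td in row.find_all('td')]
        cells = [c for c in cells if c]
        if cells:  # skip rows that end up completely empty
            writer.writerow(cells)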