Extracting data from Twitter tweets using Python

Problem description

I want to extract data like the tweet id, twitter username, and twitter id of the user who has a fb.me link displayed in his tweet, and also his fb id and fb username.

I have to do this for 200 such tweets.

My code:

from twitter.oauth import OAuth
import json
import urllib2
from twitter import *

ckey = ''
csecret = ''
atoken = ''
asecret = ''

auth = OAuth(atoken, asecret, ckey, csecret)

t_api = Twitter(auth=auth)

search = t_api.search.tweets(q='http://on.fb.me', count=1)

print search

print 'specific data'

#print search['statuses'][0]['entities']['urls']

Right now I am retrieving 1 result and want to extract the data mentioned above.

The result I get:

{u'search_metadata': {u'count': 1, u'completed_in': 0.021, u'max_id_str': u'542227367834685440', u'since_id_str': u'0', u'next_results': u'?max_id=542227367834685439&q=http%3A%2F%2Fon.fb.me&count=1&include_entities=1', u'refresh_url': u'?since_id=542227367834685440&q=http%3A%2F%2Fon.fb.me&include_entities=1', u'since_id': 0, u'query': u'http%3A%2F%2Fon.fb.me', u'max_id': 542227367834685440L}, u'statuses': [{u'contributors': None, u'truncated': False, u'text': u'Check out Monday Morning Cooking Club Cooking Tip Day #1 --&gt;http://t.co/j6mbg1OE6Z | http://t.co/c7qjunLQz2', u'in_reply_to_status_id': None, u'id': 542227367834685440L, u'favorite_count': 0, u'source': u'<a href="http://www.hootsuite.com" rel="nofollow">Hootsuite</a>', u'retweeted': False, u'coordinates': None, u'entities': {u'symbols': [], u'user_mentions': [], u'hashtags': [], u'urls': [{u'url': u'http://t.co/j6mbg1OE6Z', u'indices': [63, 85], u'expanded_url': u'http://on.fb.me/', u'display_url': u'on.fb.me'}, {u'url': u'http://t.co/c7qjunLQz2', u'indices': [88, 110], u'expanded_url': u'http://bit.ly/12BbG16', u'display_url': u'bit.ly/12BbG16'}]}, u'in_reply_to_screen_name': None, u'in_reply_to_user_id': None, u'retweet_count': 0, u'id_str': u'542227367834685440', u'favorited': False, u'user': {u'follow_request_sent': False, u'profile_use_background_image': True, u'profile_text_color': u'333333', u'default_profile_image': False, u'id': 226140415, u'profile_background_image_url_https': u'https://pbs.twimg.com/profile_background_images/704964581/bc37b358019be05efe1094a0d100ea53.jpeg', u'verified': False, u'profile_location': None, u'profile_image_url_https': u'https://pbs.twimg.com/profile_images/469488950050955264/FOoWjIEZ_normal.jpeg', u'profile_sidebar_fill_color': u'DDEEF6', u'entities': {u'url': {u'urls': [{u'url': u'http://t.co/sida0E6eXy', u'indices': [0, 22], u'expanded_url': u'http://www.mondaymorningcookingclub.com.au', u'display_url': u'mondaymorningcookingclub.com.au'}]}, u'description': 
{u'urls': []}}, u'followers_count': 1574, u'profile_sidebar_border_color': u'000000', u'id_str': u'226140415', u'profile_background_color': u'EDCDC7', u'listed_count': 50, u'is_translation_enabled': False, u'utc_offset': 39600, u'statuses_count': 12594, u'description': u"Monday Morning Cooking Club. A bunch of Sydney gals sharing and preserving the wonderful recipes of Australia's culturally diverse Jewish community.", u'friends_count': 1904, u'location': u'Sydney,  Australia', u'profile_link_color': u'C40A38', u'profile_image_url': u'http://pbs.twimg.com/profile_images/469488950050955264/FOoWjIEZ_normal.jpeg', u'following': False, u'geo_enabled': True, u'profile_banner_url': u'https://pbs.twimg.com/profile_banners/226140415/1400769931', u'profile_background_image_url': u'http://pbs.twimg.com/profile_background_images/704964581/bc37b358019be05efe1094a0d100ea53.jpeg', u'name': u'Lisa Goldberg', u'lang': u'en', u'profile_background_tile': False, u'favourites_count': 1309, u'screen_name': u'MondayMorningCC', u'notifications': False, u'url': u'http://t.co/sida0E6eXy', u'created_at': u'Mon Dec 13 12:22:13 +0000 2010', u'contributors_enabled': False, u'time_zone': u'Sydney', u'protected': False, u'default_profile': False, u'is_translator': False}, u'geo': None, u'in_reply_to_user_id_str': None, u'possibly_sensitive': False, u'lang': u'en', u'created_at': u'Tue Dec 09 08:00:53 +0000 2014', u'in_reply_to_status_id_str': None, u'place': None, u'metadata': {u'iso_language_code': u'en', u'result_type': u'recent'}}]}
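The commented-out line in the code above (`search['statuses'][0]['entities']['urls']`) is on the right track: the tweet id, the author, and the shortened links all sit at fixed paths inside this result dict. As a sketch, here is the same lookup over a trimmed, hypothetical copy of the result above (only the fields needed for the question are reproduced):

```python
# Trimmed, hypothetical copy of the search result shown above
search = {
    'statuses': [{
        'id': 542227367834685440,
        'id_str': '542227367834685440',
        'entities': {
            'urls': [
                {'url': 'http://t.co/j6mbg1OE6Z',
                 'expanded_url': 'http://on.fb.me/',
                 'display_url': 'on.fb.me'},
                {'url': 'http://t.co/c7qjunLQz2',
                 'expanded_url': 'http://bit.ly/12BbG16',
                 'display_url': 'bit.ly/12BbG16'},
            ],
        },
        'user': {'id': 226140415, 'screen_name': 'MondayMorningCC'},
    }],
}

status = search['statuses'][0]
print(status['id_str'])                # tweet id
print(status['user']['screen_name'])   # twitter username of the author
# every shortened link in the tweet, expanded
print([u['expanded_url'] for u in status['entities']['urls']])
```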

Can you please help me out with how to retrieve this particular data?

Answer

You could do something like this to issue a query and afterwards get the data you want by querying with the corresponding keys.

import json
import urllib2
import twitter

ckey = 'Your consumer key'
csecret = 'your consumer secret'
atoken = 'your token'
asecret = 'your secret token'

auth = twitter.oauth.OAuth(atoken, asecret, ckey, csecret)

twitter_api = twitter.Twitter(auth=auth)

q = 'http://on.fb.me'
count = 100

search_results = twitter_api.search.tweets(q=q, count=count)
statuses = search_results['statuses']

# Iterate through 5 more batches of results by following the cursor
for _ in range(5):
    print "Length of statuses", len(statuses)
    try:
        next_results = search_results['search_metadata']['next_results']
    except KeyError:  # No more results when next_results doesn't exist
        break

    # Create a dictionary from next_results, which has the following form:
    # ?max_id=313519052523986943&q=NCAA&include_entities=1
    kwargs = dict([kv.split('=') for kv in next_results[1:].split('&')])

    search_results = twitter_api.search.tweets(**kwargs)
    statuses += search_results['statuses']

# Show one sample search result by slicing the list...
print json.dumps(statuses[0], indent=1)

# Get the relevant data into lists
user_names = [user_mention['name']
              for status in statuses
              for user_mention in status['entities']['user_mentions']]

screen_names = [user_mention['screen_name']
                for status in statuses
                for user_mention in status['entities']['user_mentions']]

id_str = [user_mention['id_str']
          for status in statuses
          for user_mention in status['entities']['user_mentions']]

t_id = [status['id'] for status in statuses]

# Print out the first 5 results
print json.dumps(screen_names[0:5], indent=1)
print json.dumps(user_names[0:5], indent=1)
print json.dumps(id_str[0:5], indent=1)
print json.dumps(t_id[0:5], indent=1)

Result:

[
 "DijalogNet",
 "Kihot_ex_of",
 "Kihot_ex_of",
 "JAsunshine1011",
 "RobertCornegyJr"
]
[
 "Dijalog Net",
 "Sa\u0161a Jankovi\u0107",
 "Sa\u0161a Jankovi\u0107",
 "Raycent Edwards",
 "Robert E Cornegy, Jr"
]
[
 "2380692464",
 "563692937",
 "563692937",
 "15920807",
 "460051837"
]
[
 542309722385580032,
 542227367834685440,
 542202885514461185,
 542201843448045568,
 542188061598437376
]
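The lists above are built from `entities['user_mentions']`; the original question also asked for the fb.me link itself and the author of each tweet, which live under `entities['urls']` and `user` in each status. A minimal sketch, run here over a hypothetical `statuses` list shaped like the real results (the field values are taken from the sample tweet shown in the question):

```python
# Hypothetical sample shaped like the statuses list collected above
statuses = [
    {
        'id': 542227367834685440,
        'user': {'id_str': '226140415', 'screen_name': 'MondayMorningCC'},
        'entities': {
            'urls': [
                {'expanded_url': 'http://on.fb.me/'},
                {'expanded_url': 'http://bit.ly/12BbG16'},
            ],
        },
    },
]

# (tweet id, author screen name, expanded url) for every fb.me link
fb_links = [
    (status['id'], status['user']['screen_name'], url['expanded_url'])
    for status in statuses
    for url in status['entities']['urls']
    if 'fb.me' in url['expanded_url']
]
print(fb_links)
```

Note that the search API only gives you the expanded short link; to get the fb id or fb username behind an on.fb.me link you would still have to follow the HTTP redirect to the final facebook.com URL (for example with `urllib2.urlopen(url).geturl()`) and parse the path yourself.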

Have a look at this site for more examples on how to use the api.
