Is there a way to speed up the loop?


Problem Description


I need to POST a series of XML elements (the number can vary per call). I am using Requests v2.25.1 on Python 3.9.1. While my solution works, it takes roughly 27 seconds to execute, although the requests module reports an r.elapsed of under 1 second. Things I have validated:

  • r.text encoding issues do not exist
  • header information is correct
  • creating requests.Session() does not improve response time


When running Postman I also see results in under 1 second. I have isolated the issue to the for loop I am running. Data is pulled from SQL, stored in variables, then the for loop processes it and feeds my XML request.


My question is whether my following code is best practice, or whether there is a more 'Pythonic' way of accomplishing what I am trying to do? Any guidance is appreciated.

    skill_id = []
    agent_state = []
    agent_name = []

    for db_users in db_results:
        skill_id.append(db_users[0])
        agent_state.append(db_users[1])
        agent_name.append(db_users[2])

    if db_users[1] == 'NOT_READY':
        try:
            cursor.execute(sqlskillgroup)
            skillgroups = []
            for sg_query_result in cursor.fetchall():
                sg = sg_query_result[0]
                skillgroups.append(sg)
        except pyodbc.Error as e:
            print("Error retrieving skill group information from database.")
            quit()
        finally:
            connect.close()

    icm_url = "https://url_of_post"
    xmlfile = open('skill_remove.xml', 'r')
    body = xmlfile.read()
    icm_header = {
        'Content-Type': 'application/xml',
        'Authorization': 'Basic Q2hhZF9NZXllckBhamcuY29tOmNpc2Nv',
        'Cookie': 'JSESSIONID=0C8456E3901DF7A3A0862E17FD50547D'
    }

    for sgid in skillgroups:
        r = requests.request("POST", icm_url, headers=icm_header,
                             data=body.format(agent=str(skill_id[0]),
                                              skill_urls='<refURL>/unifiedconfig/config/skillgroup/' + str(sgid) + '</refURL>'),
                             verify=r"CAchain.pem",
                             cert=(r"cert.cer", r"cert.key"))

    print(r.text)
    print(r.elapsed)
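On the 'more Pythonic' point: the three parallel append lists at the top of the code can be built in one step with zip. A minimal sketch, assuming db_results is a sequence of (skill_id, state, name) rows as in the code above (the sample rows here are made up):

```python
# Sample rows standing in for the SQL results (hypothetical values)
db_results = [
    (101, 'NOT_READY', 'Alice'),
    (102, 'READY', 'Bob'),
    (103, 'NOT_READY', 'Carol'),
]

# zip(*rows) transposes the rows into three columns;
# map(list, ...) turns each column tuple into a list
skill_id, agent_state, agent_name = map(list, zip(*db_results))

print(skill_id)     # [101, 102, 103]
print(agent_state)  # ['NOT_READY', 'READY', 'NOT_READY']
```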


Recommended Answer


It looks like you are timing only the elapsed time of the last request in the loop. You can calculate the total elapsed time by moving r.elapsed into the loop, adding each request's value to a running total, and printing it at the end.

    import datetime

    total_elapsed = datetime.timedelta(0)  # r.elapsed is a timedelta, so start from timedelta(0), not 0
    for sgid in skillgroups:
        r = requests.request("POST", icm_url, headers=icm_header,
                             data=body.format(agent=str(skill_id[0]),
                                              skill_urls='<refURL>/unifiedconfig/config/skillgroup/' + str(sgid) + '</refURL>'),
                             verify=r"CAchain.pem",
                             cert=(r"cert.cer", r"cert.key"))
        total_elapsed += r.elapsed

    print(total_elapsed)
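For context, r.elapsed only measures the time from sending the request until the response headers are parsed, so client-side work done each iteration (formatting the XML body, setting up connections) is not counted; that is how each request can report under 1 second while the loop takes ~27. A minimal sketch with hypothetical timedelta values standing in for r.elapsed, comparing the summed request time against the loop's wall-clock time:

```python
import datetime
import time

# Hypothetical per-request timings standing in for each r.elapsed value
per_request = [datetime.timedelta(milliseconds=ms) for ms in (250, 310, 190)]

start = time.perf_counter()
total_elapsed = datetime.timedelta(0)  # timedeltas sum cleanly when started from timedelta(0)
for elapsed in per_request:
    total_elapsed += elapsed
wall_clock = time.perf_counter() - start  # wall-clock seconds for the whole loop

print(total_elapsed)  # 0:00:00.750000
```

If wall_clock is much larger than total_elapsed in the real loop, the time is being spent outside the request/response cycle itself.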

Basic asyncio and aiohttp implementation

    import asyncio
    import aiohttp

    async def post_request(session, url):
        # add your request body, headers, certificate etc. here
        async with session.post(url) as response:
            return response.status

    async def main():
        # use a client session so connections are pooled and closed at the end
        async with aiohttp.ClientSession() as session:
            tasks = []
            for url in urls:
                # create one task per URL so the POSTs run concurrently
                t = asyncio.create_task(post_request(session, url))
                tasks.append(t)

            # wait for all tasks to finish before the session closes
            await asyncio.gather(*tasks)

    asyncio.run(main())
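If switching to aiohttp is too big a jump, concurrent.futures gives similar overlap while staying with requests: each POST runs in a worker thread, so slow round-trips no longer add up serially. A minimal sketch where post_one is a hypothetical stand-in for the real requests call, here just building the refURL each request would carry:

```python
from concurrent.futures import ThreadPoolExecutor

def post_one(sgid):
    # Stand-in for the real requests.request("POST", ...) call;
    # here it just returns the refURL that request would send
    return '<refURL>/unifiedconfig/config/skillgroup/' + str(sgid) + '</refURL>'

skillgroups = [10, 20, 30]  # sample skill group IDs (hypothetical)

# pool.map preserves input order; each call runs in its own worker thread
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(post_one, skillgroups))

print(results[0])  # <refURL>/unifiedconfig/config/skillgroup/10</refURL>
```

In the real loop, post_one would issue the POST with the same headers and cert arguments as above. Note that sharing one requests.Session across threads is a common pattern but not officially guaranteed thread-safe; one session per thread is the conservative choice.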

