Problem Description
Hi Guys,
I have a copyPipeline that copies multiple files with different schemas from Data Lake into an Azure SQL Database, and it runs exactly as I want. The main issue is that when the copy of one of the files to Azure SQL Database fails, I don't have a way to rerun just that file; after resolving the problem with the failed file, I'm forced to rerun the entire copyPipeline. Please note that the copyPipeline does an incremental copy daily, so re-running it would insert duplicate rows into Azure SQL Database for the files that had already copied successfully.
Is there a way I can manage these failures better without rerunning the entire copyPipeline?
Regards,
Thabo
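
One common way to make a single-file re-run safe is to load each file's daily slice idempotently, so that running the same file twice cannot duplicate rows. The sketch below only illustrates that idea and is not part of the original pipeline: it assumes Python with pyodbc, a LoadDate column on the target table, and a hypothetical load_file_slice helper; the connection string, table name, and file path are placeholders.

# Hypothetical sketch: replace one file's daily slice in Azure SQL Database so that
# re-running just that file after a failure cannot create duplicate rows.
import csv
import pyodbc

def load_file_slice(conn_str: str, table: str, csv_path: str, slice_date: str) -> None:
    """Delete and reload the rows belonging to one daily slice of one file."""
    conn = pyodbc.connect(conn_str)        # autocommit is off, so both steps share a transaction
    try:
        cur = conn.cursor()
        # 1. Remove any rows already loaded for this slice (a no-op on the first run).
        cur.execute(f"DELETE FROM {table} WHERE LoadDate = ?", slice_date)
        # 2. Insert the file's rows, tagging each with the slice date.
        with open(csv_path, newline="", encoding="utf-8") as f:
            reader = csv.reader(f)
            header = next(reader)           # assume the first row holds the column names
            cols = ", ".join(header)
            marks = ", ".join("?" for _ in header)
            sql = f"INSERT INTO {table} ({cols}, LoadDate) VALUES ({marks}, ?)"
            for row in reader:
                cur.execute(sql, (*row, slice_date))
        conn.commit()                       # delete + inserts succeed or fail together
    except Exception:
        conn.rollback()
        raise
    finally:
        conn.close()

# Example: re-running only the failed file is now safe.
# load_file_slice("Driver={ODBC Driver 17 for SQL Server};Server=...;Database=...;UID=...;PWD=...",
#                 "dbo.Sales", "sales_2021-05-01.csv", "2021-05-01")

With a per-slice pattern like this behind a per-file loop (for example, a ForEach activity iterating over the file list), only the file that actually failed needs to be run again, and the files that already succeeded are untouched.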