Problem description
Is it possible to iterate through sub-folders, so that rather than processing hourly we can process a day's worth of data? It is straightforward to do with a "Copy Data" activity, as there is an option to recursively process files, but in the data flow that option isn't available.
I have tried the expression below, but it doesn't read through the daily folders:
@concat('ping/',formatDateTime(utcnow(),'yyyy'),'/',formatDateTime(utcnow(),'MM'),'/',formatDateTime(utcnow(),'dd'))
Thanks
Recommended answer
You can use a wildcard path; it will process all files that match the pattern. However, all the files should follow the same schema.
For example, /**/movies.csv will match all movies.csv files in the sub-folders. To use a wildcard path, you need to set the container correctly in the dataset, and set the wildcard path based on the relative path.
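For the asker's layout, one way to combine the two ideas is to build the daily folder path with the date expression from the question and append a wildcard to cover all the hourly sub-folders. This is only a sketch: it assumes the hourly files sit below the daily folder, that they are CSV files, and that the wildcard path field is parameterized with a pipeline expression; the `ping` prefix and the `yyyy/MM/dd` folder structure are taken from the question.

```
@concat('ping/',formatDateTime(utcnow(),'yyyy'),'/',formatDateTime(utcnow(),'MM'),'/',formatDateTime(utcnow(),'dd'),'/**/*.csv')
```

With the dataset pointed at the container, this resolves to something like `ping/2021/05/17/**/*.csv`, so the data flow source picks up every matching file for the day regardless of which hourly sub-folder it sits in.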
Thanks,
明阳
This concludes this article on dynamic file names in data flows and processing files recursively. We hope the recommended answer is helpful.