This post covers how to dynamically generate Mocha tests in the before() block of a describe().

Problem Description

I am creating a mocha test suite that tests a command-line utility called by our nwjs app; the utility takes files and produces an output JSON file. I have thousands of combinations of input files, and the tests (it()s) I want to generate depend on the contents of the JSON output from the cmdline utility.

Mocha seems to require me to create all of the it()s upfront, but that means these scripts would need to run upfront and the JSON output be captured. I was hoping to do:

'use strict';
const path = require('path');
const glob = require('glob');
const expect = require('sharedjs/chai-wrapper').expect;
const utils = require('sharedjs/utils');

describe('Generated Tests:', function() {
  let testNum = 0;
  let globOpts = { nodir: true };
  // filetype1_dir / filetype2_dir: input directories, assumed to be
  // defined elsewhere in the test setup
  let type1files = glob.sync(path.join(filetype1_dir, '*'), globOpts);
  let type2files = glob.sync(path.join(filetype2_dir, '*'), globOpts);
  for (let i = 0; i < type1files.length; i++) {
    for (let j = 0; j < type2files.length; j++) {
      testNum++;
      let testName = utils.mkTestName(testNum, i, j);

      describe(testName, function() {
        let run;
        before(function() {
          run = utils.runCommand(type1files[i], type2files[j]);
          // run = { status: result.status, command: result.args.join(' '), output: fse.readJsonSync(outfile) }
          if (run.status !== 0) {
            throw new Error(run.status+'='+run.command);
          }
        });

        // NOTE: this is the part that doesn't work: `run` is still
        // undefined here, because this loop executes when the describe()
        // callback runs, before the before() hook has populated it.
        for (let key in run.output.analysis) {
          it(key+'=0', function() {
            expect(run.output.analysis[key].value).to.be.equal('0', key+'=0');
          });
        }
      });
    }
  }
});

I'll be making thousands of command line calls here. I don't want to make them all up front, cache the files (or worse, have all of the json objects loaded into memory) and then start running the tests.

I know that I can create a high-level "validate json" test and just do a bunch of expect()s in there, but there are two problems with that. First, they wouldn't be independent, named tests shown as failures; second, the first expect failure fails the whole test, so I lose visibility into other errors further down the JSON.
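For comparison, that rejected approach would look roughly like the sketch below (reusing the requires from the snippet above; file1/file2 are hypothetical stand-ins for one concrete input pair):

// Rejected approach: one catch-all test. The first failing expect()
// throws and aborts the it(), hiding any later mismatches, and the
// report shows a single test name instead of one per dynamic key.
describe('validate json', function() {
  it('all analysis values are 0', function() {
    const run = utils.runCommand(file1, file2); // file1/file2: hypothetical inputs
    for (const key in run.output.analysis) {
      expect(run.output.analysis[key].value).to.be.equal('0', key + '=0');
    }
  });
});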

Ideas?

-- UPDATED WITH SAMPLE JSON OUTPUT FROM utils.runCommand() --

{
    data1: { ... },
    data2: { ... },
    analysis: {
        dynamicKey1: <analysisObj>,
        dynamicKey...: <analysisObj>,
        dynamicKeyN: <analysisObj>
    }
}
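
A concrete instance might look something like this (the key and field names here are hypothetical; the question only fixes the overall shape, and the tests below rely only on each analysisObj having a value):

{
    data1: { ... },
    data2: { ... },
    analysis: {
        mean_delta: { value: '0' },    // hypothetical dynamic key
        stddev_delta: { value: '0' }   // only .value is asserted on
    }
}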

The keys in the analysis depend on the type of data that is entered, and there are a large number of possibilities. The names of the dynamic keys can change from run to run. From a testing perspective, I am not interested in the name of a key, only that its analysisObj is conformant. For example, if I pass identical data1 and data2 to utils.runCommand(), then the portion of the analysisObj that represents the delta between the two should be zero across the board.

I don't get the analysisObjs until after I run the script, and if I'm running 100,000 tests, I don't want to have to pre-run everything or pre-load all of it into memory or a filesystem.

Recommended Answer

I want to thank @JoshLee for pointing me down some helpful research paths.

After looking at the mocha source code, I learned that:

  1. describe() calls return a Suite object
  2. the Suite object contains the tests to be run (suite.tests)
  3. the tests have not yet been looked at when the suite's before() runs
  4. I can add any number of tests inside before() with suite.addTest(), and they will all be run
  5. most importantly, my utils.runCommand() only runs at the start of each test suite, and the suites run sequentially (the tests added earlier take place after all of the original describe blocks have executed once)
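
Condensed, points 1-4 amount to the minimal pattern below (a sketch only; the full five-step version follows):

const Test = require('mocha/lib/test');

const suite = describe('dynamic suite', function() {
  before(function() {
    // The Suite object already exists here and mocha has not yet
    // consumed suite.tests, so tests appended now will still run.
    suite.addTest(new Test('added from before()', function() {
      // assertions go here
    }));
  });

  // Without at least one statically declared test, mocha treats the
  // suite as empty and skips its before() hook entirely.
  it('placeholder', function() {});
});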

The output is as expected and the results reflect the proper number of tests. I've used this to auto-generate a little over 50,000 tests spread unevenly across 1,980 test suites, with mochawesome as the reporter, and it worked great.

There are 5 steps required to pull this off, described in the updated code snippet below.

'use strict';
const path = require('path');
const glob = require('glob');
const expect = require('sharedjs/chai-wrapper').expect;
const utils = require('sharedjs/utils');

// Step 1: Pull in Test class directly from mocha
const Test = require('mocha/lib/test');

// Step 2: Simulates it() from mocha/lib/interfaces/bdd.js
//   I ignore the isPending() check from bdd.js. I don't know
//   if ignoring it is required, but I didn't see a need to add
//   it for my case to work
function addTest(suite, title, fn) {
  let test = new Test(title, fn);
  test.file = __filename;
  suite.addTest(test);
  return test;
}

let testNum = 0;
let globOpts = { nodir: true };
let type1files = glob.sync(path.join(filetype1_dir, '*'), globOpts);
let type2files = glob.sync(path.join(filetype2_dir, '*'), globOpts);
for (let i = 0; i < type1files.length; i++) {
  for (let j = 0; j < type2files.length; j++) {
    testNum++;
    let testName = utils.mkTestName(testNum, i, j);

    // Step 3: Save the suite object so that we can add tests to it.
    let suite = describe(testName, function() {
      let run;
      before(function() {
        run = utils.runCommand(type1files[i], type2files[j]);
        // run = { status: result.status, command: result.args.join(' '),
        //         output: fse.readJsonSync(outfile) }
        if (run.status !== 0) {
          throw new Error(run.status+'='+run.command);
        }

        for (let key in run.output.analysis) {
          // Step 4: Dynamically add tests
          //   suite is defined at this point since before() is always
          //   run after describe() returns.
          addTest(suite, key+'=0', function() {
            expect(run.output.analysis[key].value).to.be.equal('0', key+'=0');
          });
        }
      });

      // Step 5: Add a placeholder test inside the describe() block so
      //   that the suite is not empty and will actually be run.
      //   Can be it() for a pass result or it.skip() for pending.
      it('Placeholder for ' + testName, function () {
        expect(true).to.be.true;
      });
    });
  }
}
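
One caveat: require('mocha/lib/test') reaches into mocha's internals, which can break between versions. If your mocha version exposes the class on its main export (current versions do), the same Test class is available without the deep require:

// Same Test class via mocha's public export (version permitting)
const Test = require('mocha').Test;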
