Although the talk was interesting, I wasn't entirely convinced by some of the points made.
The idea of automating acceptance testing seems extremely valuable. Acceptance testing is a chore, and in reality the customer often fails to do it, or to do it effectively. Indeed, in the worst cases the developer relies on the customer failing to do the acceptance testing properly! So, providing an automated tool for this testing allows the customer to fulfil their role with the minimum of effort, and gives the developers a way to verify that their code passes the tests whenever they want to check it.
The distinction between unit testing and acceptance testing is a little ambiguous. Unit tests are distinguished by exercising small units quickly, so that they can be run as part of every compile/build cycle. Acceptance tests should exercise the entire package (including the interface) to ensure that everything works together in the way the customer expects; because they do not have to be run all the time, they can take longer to perform. In reality, a quick test that exercises the whole application makes a useful unit test, and automated acceptance testing is not always easy, so it is sometimes necessary to exercise units in isolation: perhaps bypassing the interface, or injecting data directly as test input.
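The distinction above can be sketched in code. This is a minimal, hypothetical example (the `DiscountCalculator` class and its rule are my own invention, not anything from the talk): the unit test exercises one method in isolation and runs in milliseconds, while the acceptance-style test drives the whole (toy) purchase flow, injecting data directly rather than going through a real interface.

```java
// Hypothetical business rule: 10% off orders of 100 or more.
class DiscountCalculator {
    static int discountedTotal(int total) {
        return total >= 100 ? total - total / 10 : total;
    }
}

public class TestSketch {
    // Unit test: one small unit, fast enough to run on every build.
    static void unitTestDiscount() {
        check(DiscountCalculator.discountedTotal(50) == 50);
        check(DiscountCalculator.discountedTotal(200) == 180);
    }

    // Acceptance-style test: exercises the whole (toy) purchase flow,
    // injecting the basket data directly rather than driving a UI.
    static void acceptanceTestPurchase() {
        int basket = 40 + 60;                          // two items
        int charged = DiscountCalculator.discountedTotal(basket);
        check(charged == 90);                          // the customer's expectation
    }

    static void check(boolean ok) {
        if (!ok) throw new AssertionError("test failed");
    }

    public static void main(String[] args) {
        unitTestDiscount();
        acceptanceTestPurchase();
        System.out.println("all tests passed");
    }
}
```

In a real project the unit tests would live in a JUnit suite and the acceptance flow would drive far more of the application, but the shape of the distinction is the same.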
So one of the main advantages of FIT and Exactor is that they present tests in a form that is more readable than your average bit of unit testing code. This allows customers to read and understand both the tests and their results. In theory it also allows customers to write tests, but it was acknowledged that this rarely happens in practice.
Despite some ambiguity in how the information was presented, it would appear that these acceptance testing tools are basically layered on top of JUnit, with Exactor being an abstraction over FIT that provides plain-text entry of tests rather than the HTML tables FIT requires.
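The general idea behind plain-text test entry can be sketched as follows. This is not Exactor's actual API — just an illustrative toy runner with invented commands (`AddItem`, `CheckTotal`): each line of the script names a command plus an argument, and the runner maps command names to code, much as these tools map readable test lines onto JUnit-style checks underneath.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

// Toy plain-text test runner: maps command words to actions.
public class ScriptRunner {
    private int basket = 0;
    private boolean passed = true;

    public boolean run(String script) {
        Map<String, Consumer<String>> commands = new HashMap<>();
        // Hypothetical commands for a toy shopping-basket application.
        commands.put("AddItem", arg -> basket += Integer.parseInt(arg));
        commands.put("CheckTotal",
            arg -> passed &= basket == Integer.parseInt(arg));

        for (String line : script.split("\n")) {
            String[] parts = line.trim().split("\\s+");
            if (parts[0].isEmpty()) continue;       // skip blank lines
            Consumer<String> cmd = commands.get(parts[0]);
            if (cmd == null) throw new IllegalArgumentException(
                "unknown command: " + parts[0]);
            cmd.accept(parts.length > 1 ? parts[1] : "");
        }
        return passed;
    }

    public static void main(String[] args) {
        String script = "AddItem 40\nAddItem 60\nCheckTotal 100";
        System.out.println(new ScriptRunner().run(script) ? "PASS" : "FAIL");
    }
}
```

The script itself is the customer-readable artefact; the command implementations are where the developers' test code lives.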
My main concern is that this solution fails to acknowledge reality. If customers are not really going to write tests themselves, but may do so with the help of professional testers or developers, then it makes more sense to provide tools which automatically present automated tests and their results in a user-friendly way, rather than asking programmers to learn yet another entry syntax for coding the tests. Having said this, I can see genuine value in the way these tools enable customers to make small amendments to existing tests directly.
Ultimately I think that Apple's scripting technologies provide a nice analogy that is worth looking into.
For many years Apple has provided end users with a scripting language called AppleScript, which was designed to allow users (not programmers) to script and automate repeated or complex tasks. The language had a very plain-English feel that made existing scripts easy to understand, but it gave the illusion that you could write scripts in plain English when in reality it was just as demanding of syntax and structure, and it was very bad at explaining your coding errors when you went wrong.
Apple is just about to release a new version of their operating system (Mac OS 10.4, code-named 'Tiger') which contains a new application called "Automator". This provides a GUI for building scripts which allows you to drag and drop linked operations in such a way that the interface will only allow you to link compatible operations and will only allow you to link them in a suitable order. It doesn't stop you doing pointless or stupid things, but it ensures that you produce a 'script' which works.
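The mechanism Automator relies on — only letting you link compatible operations — has a direct analogue in typed code. A minimal sketch (my own illustration, with invented steps): each step declares its input and output types, and the compiler, standing in for the GUI, refuses to chain steps whose types do not match.

```java
import java.util.function.Function;

public class Pipeline {
    // link only compiles when the output type of the first step
    // matches the input type of the second.
    static <A, B, C> Function<A, C> link(Function<A, B> first,
                                         Function<B, C> second) {
        return first.andThen(second);
    }

    public static void main(String[] args) {
        Function<String, Integer> length = String::length;
        Function<Integer, String> describe = n -> n + " characters";

        // Compatible: String -> Integer -> String.
        Function<String, String> flow = link(length, describe);
        System.out.println(flow.apply("hello"));   // prints "5 characters"

        // link(describe, describe);  // would not compile: types don't match
    }
}
```

Like Automator, this doesn't stop you building a pointless pipeline, but it does guarantee the pieces fit together.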
Automated acceptance testing involving the customer directly will really come into its own when the developer community invests the considerable time needed to build a friendly user interface: one that hand-holds customers through the process and lets them create and edit tests without having to understand any kind of syntax at all.
I almost forgot to mention that a reference was made to Selenium - a tool for automated testing of web applications directly through a range of browsers. I haven't had time to look into how the tests are defined, but it looks very useful in principle.