[fpc-pascal] msedb and fcl-db benchmarks
Joost van der Sluis
joost at cnoc.nl
Tue Jul 17 23:58:39 CEST 2007
On Tue, 2007-07-17 at 21:27 +0200, Coco Pascal wrote:
> Joost van der Sluis wrote:
> > On Tue, 2007-07-17 at 19:58 +0200, Coco Pascal wrote:
> >> Joost van der Sluis wrote:
> >>> Discussion: What tests could I do more? Is there something I overlooked?
> >> To me it seems that benchmark tests on 100000 records are missing
> >> relevance more and more.
> > Of course, but it still yields some useful results.
> >> I'm interested in responsiveness in n-tier solutions: opening connection
> >> - begin transaction - quering/updating mostly 1 or a few records -
> >> commit transaction - close connection - browsing a small (<50) set of
> >> records . Opening /closing connections could be skipped in this context
> >> when a connectionpool is used.
> > I did those tests quickly. There wasn't any difference between the two.
> > But compared to the time the open/close takes, you can't measure the
> > browse speed if you use only 50 records. So to test the browse speed, I
> > simply used more records. (Also not fool-proof, but that wasn't my
> > intention.)
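
For reference, the kind of timing loop I mean looks roughly like this. This is only a sketch: the PostgreSQL backend, the credentials and the table name are placeholders, not what I actually benchmarked.

```pascal
program browsebench;
{$mode objfpc}{$H+}
uses
  SysUtils, DateUtils, sqldb, pqconnection;  // example backend, swap for yours

var
  Conn: TPQConnection;
  Trans: TSQLTransaction;
  Query: TSQLQuery;
  Start: TDateTime;
begin
  Conn := TPQConnection.Create(nil);
  Trans := TSQLTransaction.Create(nil);
  Query := TSQLQuery.Create(nil);
  try
    Conn.DatabaseName := 'benchdb';          // placeholder credentials
    Conn.UserName := 'bench';
    Conn.Password := 'bench';
    Conn.Transaction := Trans;
    Query.DataBase := Conn;
    Query.SQL.Text := 'select * from benchtable';

    // With few records this open/close cost dominates everything else.
    Start := Now;
    Query.Open;
    Query.Close;
    WriteLn('open/close: ', MilliSecondsBetween(Now, Start), ' ms');

    // Browse speed only becomes measurable with many records.
    Start := Now;
    Query.Open;
    while not Query.EOF do
      Query.Next;
    Query.Close;
    WriteLn('browse:     ', MilliSecondsBetween(Now, Start), ' ms');
  finally
    Query.Free;
    Trans.Free;
    Conn.Free;
  end;
end.
```
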
> So this makes your case, doesn't it? Martin argued that his dataset was
> designed to work together with his visual controls.
Yes, but it's an important part, because exactly this is the part that
has changed. Apart from this and the widestring difference, there are
no real differences. So I'm very curious to know why his dataset works
better with his visual controls.
I want to know this so we can eliminate the need for two copies of
essentially the same component. Or at least that's what I hope for.
> Because it makes no sense to use large datasets with those controls, one
> could argue that performance can't be relevant in this case.
Oh, that could well be true. But if there is no performance difference,
and no other difference, why would you accept the code duplication?
> >> Also I'm interested in tests selecting/updating/browsing sets larger
> >> than 1 million records, obviously local.
> > That's simple: adjust the number of records that is created, and then
> > call the edit-field tests.
> >> Consequently one could ask if one type of dataset could satisfy
> >> requirements regarding performance and use of resources in both cases.
> > I think you can. Unless you want to edit all 1 million records (as I
> > said in my last message).
> > It becomes different if you only need to browse one way through the
> > records. (or, obviously, you have a dataset with only one record as
> > result)
> > Or can you explain to me how you can make a dataset faster for usage
> > with more records, and at the same time slower for low record counts?
> > (or the other way around)
> I'm referring to very large buffered datasets held in memory by
> middleware for fast access. It is well known, for instance, that Delphi's
> TClientDataset chokes with say 100000 records, whereas different designs
> scale much better, TkbmMemTable for instance. Not long ago Marco van der
> Voort suggested (in another forum) to have a look at something like his
> "lightcontainers", apparently something completely different (from, for
> instance, a TList-based dataset).
> I've put forward this point because for datasets used with visual
> controls speed can't be the issue at all. But for very large sets it
> certainly is.
Well, that's exactly the difference between mse's bufdataset and the
fcl-db one. Tmsebufdataset is based on a TList, TBufDataset isn't. And I
believe that a TList-based dataset is always slower (although it could
save some memory).
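
To illustrate what I mean by the trade-off: this is only a sketch, not TBufDataset's actual buffer layout. The fixed RecordSize and the single contiguous block are assumptions, standing in for any non-TList scheme.

```pascal
program bufferdemo;
{$mode objfpc}{$H+}
uses
  Classes;

const
  RecCount   = 100000;
  RecordSize = 64;  // assumption: simplified fixed-size records

var
  PtrList: TList;
  Block: PByte;
  i: Integer;
  p: Pointer;
begin
  { TList-style buffering: one heap allocation per record, and every
    access pays an extra pointer indirection through the list. }
  PtrList := TList.Create;
  for i := 0 to RecCount - 1 do
  begin
    GetMem(p, RecordSize);
    PtrList.Add(p);
  end;

  { Contiguous buffering: record i lives at Block + i * RecordSize.
    No per-record allocation and better cache locality -- but growing
    means reallocating the whole block, whereas the TList only grows
    its (much smaller) pointer array. That is where the speed/memory
    trade-off comes from. }
  GetMem(Block, RecCount * RecordSize);

  // clean up
  for i := 0 to PtrList.Count - 1 do
    FreeMem(PtrList[i], RecordSize);
  PtrList.Free;
  FreeMem(Block, RecCount * RecordSize);
end.
```
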
But I need to do some new tests to take the widestrings into account.
And I have to implement a utf8<->ansistring conversion in TBufDataset.
(Or somewhere else; I think that this conversion isn't the job of
TBufDataset, also because some db engines have conversion commands built
in. This has to be done at the TConnection level.)
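
Roughly what I have in mind, as a sketch: the helper names are made up, but FPC's Utf8ToAnsi/AnsiToUtf8 do exist and convert via a widestring and the system codepage.

```pascal
program utf8conv;
{$mode objfpc}{$H+}
uses
  SysUtils;

{ Hypothetical helpers that a TConnection descendant could call when
  moving field data between the database (UTF-8) and the client
  (system-codepage ansistrings). }

function FieldToClient(const DBValue: UTF8String): AnsiString;
begin
  // Utf8ToAnsi decodes to a widestring, then re-encodes in the
  // system ansi codepage; characters outside it are lost.
  Result := Utf8ToAnsi(DBValue);
end;

function ClientToField(const ClientValue: AnsiString): UTF8String;
begin
  Result := AnsiToUtf8(ClientValue);
end;

begin
  // Plain ASCII round-trips unchanged through both conversions.
  WriteLn(FieldToClient(ClientToField('test')));
end.
```
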