[Xapian-discuss] Search performance issues and profiling/debugging search?
Ron Kass
ron at pidgintech.com
Wed Oct 24 03:04:07 BST 2007
Hi Richard & Olly.
Yes, the database is NOT being reopened. The databases are accessed
locally, not over the net. No data is being added to the databases while
testing the searches.
Indeed, I found the estimates to be strange. I wasn't sure why, but I was
more concerned with performance and the fact that things were not
"warming up".
I don't know anything about oprofile yet; I'll have to dig deeper there
(any pointers would be handy).
What was not clear from your answers is whether it makes sense that second
(and third) runs of a search take that long. Also, we are getting 15+ and
sometimes 50+ second searches on some simple (non-phrase) search terms
(for example: [map of the world] [Time: 15.703 sec] [Results: 25 of 1355];
[Time: 10.004 sec] [Results: 25 of 1230]; [Results: 25 of 1117]; and
there are ones that take even more time).
I'm attaching a few links to the output of more detailed debug runs.
First I ran a test on all 8 databases (1, 2, 4, 6, 7, 8, 9 and 10)
http://www.pidgintech.com/other/fts/test/test_1-2-4-6-7-8-9-10.txt
This resulted in a segmentation fault.
Then I removed database 10 and reran.
http://www.pidgintech.com/other/fts/test/test_1-2-4-6-7-8-9.txt
This ran successfully. However, estimates varied: [16044], [17314], [17219],
and overall times were quite long, both for the original search and for
the repeated searches: [Total Time: 0.96 sec], [Total Time: 0.96 sec],
[Total Time: 0.941 sec].
Then I ran it on each of the databases separately. Note that by this
time it is likely (or at least I thought it should be) that the blocks
are cached in memory from the previous test (of 1, 2, 4, 6, 7, 8 and 9).
http://www.pidgintech.com/other/fts/test/test_isolated_1.txt
Summary for DB #1: Estimates stayed fixed at [3015]; times did not speed up
for repeated searches: [Total Time: 0.187 sec], [Total Time: 0.183 sec],
[Total Time: 0.199 sec]
http://www.pidgintech.com/other/fts/test/test_isolated_2.txt
Summary for DB #2: Estimates stayed fixed at [789]; times became faster,
although the first one is extremely long considering it is a single DB of
this size: [Total Time: 0.891 sec], [Total Time: 0.082 sec], [Total Time:
0.083 sec]
http://www.pidgintech.com/other/fts/test/test_isolated_4.txt
Summary for DB #4: Estimates stayed fixed at [3217]; times became faster,
although the first one is extremely long and the others are still long
(>0.2 sec) for a single DB: [Total Time: 1.18 sec], [Total
Time: 0.215 sec], [Total Time: 0.212 sec]
http://www.pidgintech.com/other/fts/test/test_isolated_6.txt
Summary for DB #6: Estimates varied: [4447], [3023], [3023]; times were
extremely long for the first two searches, and even the last one was
still long (>0.2 sec): [Total Time: 1.961 sec], [Total Time: 4.299 sec],
[Total Time: 0.204 sec]
http://www.pidgintech.com/other/fts/test/test_isolated_7.txt
Summary for DB #7: Estimates varied: [4954], [4954], [3593]; times
varied and were relatively long overall: [Total Time: 0.81 sec], [Total
Time: 0.429 sec], [Total Time: 0.779 sec]
http://www.pidgintech.com/other/fts/test/test_isolated_8.txt
Summary for DB #8: Estimates varied: [805], [562], [562]; times varied
and were relatively long for the first two tries: [Total Time: 0.431 sec],
[Total Time: 1.077 sec], [Total Time: 0.063 sec]
http://www.pidgintech.com/other/fts/test/test_isolated_9.txt
Summary for DB #9: Estimates stayed fixed at [3415]; times became faster,
although the first one is extremely long and the others are still long
(>0.2 sec): [Total Time: 3.137 sec], [Total Time: 0.222 sec], [Total
Time: 0.212 sec]
Now I tried #10 (the one which, when added to a group with add_database,
causes the segfault), just in case... and, on its own, it worked.
http://www.pidgintech.com/other/fts/test/test_isolated_10.txt
Summary for DB #10: Estimates stayed fixed at [3415]; times were fast overall:
[Total Time: 0.108 sec], [Total Time: 0.072 sec], [Total Time: 0.075 sec]
The following is the output of delve on the 8 databases, including delve
on the specific terms (no, recip), to try to figure out why we are
getting a segmentation fault when database 10 is combined with the others.
Note that delve runs through without problems, so I wonder where the
segmentation fault is coming from. It happens during the get_mset() call.
[root@fts1 maintenance]# delve /fts/Database_1/
number of documents = 20810000
average document length = 204.054
[root@fts1 maintenance]# delve /fts/Database_2/
number of documents = 6020000
average document length = 203.334
[root@fts1 maintenance]# delve /fts/Database_4/
number of documents = 22365000
average document length = 204.573
[root@fts1 maintenance]# delve /fts/Database_6/
number of documents = 21875000
average document length = 204.567
[root@fts1 maintenance]# delve /fts/Database_7/
number of documents = 24480000
average document length = 205.822
[root@fts1 maintenance]# delve /fts/Database_8/
number of documents = 3870000
average document length = 209.001
[root@fts1 maintenance]# delve /fts/Database_9/
number of documents = 23870000
average document length = 205.659
[root@fts1 maintenance]# delve /fts/Database_10/
number of documents = 5325000
average document length = 202.337
[root@fts1 maintenance]# delve /fts/Database_10/ -1 -t Zno | tail -n 5
5324926
5324929
5324952
5324991
5324995
[root@fts1 maintenance]# delve /fts/Database_10/ -1 -t Zrecip | tail -n 5
5321034
5324077
5324081
5324178
5324179
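To narrow this down, here is roughly the kind of minimal reproducer I intend to run under a debugger, starting with database 10 plus one other database and adding the rest one at a time. The query string and the English stemmer below are just placeholders standing in for whatever our real code uses:

#include <xapian.h>
#include <iostream>

int main() {
    try {
        // Start with Database_10 plus one other database; the remaining
        // databases can be added one at a time to find which combination
        // triggers the crash.
        Xapian::Database db("/fts/Database_1");
        db.add_database(Xapian::Database("/fts/Database_10"));

        Xapian::QueryParser qp;
        qp.set_stemmer(Xapian::Stem("english"));
        qp.set_stemming_strategy(Xapian::QueryParser::STEM_SOME);
        qp.set_database(db);

        Xapian::Enquire enquire(db);
        // Placeholder query; substitute one of the queries that crashes.
        enquire.set_query(qp.parse_query("map of the world"));

        // The segfault happens during this call.
        Xapian::MSet mset = enquire.get_mset(0, 25);
        std::cout << "estimated matches: " << mset.get_matches_estimated()
                  << std::endl;
    } catch (const Xapian::Error &e) {
        std::cerr << "Xapian exception: " << e.get_msg() << std::endl;
        return 1;
    }
    return 0;
}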
We used very minimal code, stripped of all the "other stuff" we had in it
before. It simply opens the DB once, executes the search, and times things.
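For reference, here is a rough sketch along the lines of that harness (the paths, query and stemmer are illustrative), timing the get_mset() call separately from fetching the document data, as Richard suggested:

#include <xapian.h>
#include <sys/time.h>
#include <iostream>
#include <string>

// Wall-clock time in seconds.
static double now() {
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec + tv.tv_usec / 1000000.0;
}

int main(int argc, char **argv) {
    if (argc < 2) {
        std::cerr << "usage: " << argv[0] << " DB_PATH [DB_PATH...]" << std::endl;
        return 1;
    }
    try {
        // Open the first database and add the others to the same object,
        // as in the tests above.
        Xapian::Database db(argv[1]);
        for (int i = 2; i < argc; ++i)
            db.add_database(Xapian::Database(argv[i]));

        Xapian::QueryParser qp;
        qp.set_stemmer(Xapian::Stem("english"));
        qp.set_stemming_strategy(Xapian::QueryParser::STEM_SOME);
        qp.set_database(db);

        Xapian::Enquire enquire(db);
        enquire.set_query(qp.parse_query("map of the world"));

        // Run the identical search three times against the same Database
        // object, timing the match separately from fetching document data.
        for (int run = 1; run <= 3; ++run) {
            double t0 = now();
            Xapian::MSet mset = enquire.get_mset(0, 25);
            double t1 = now();
            for (Xapian::MSetIterator it = mset.begin(); it != mset.end(); ++it)
                std::string data = it.get_document().get_data();
            double t2 = now();
            std::cout << "run " << run
                      << ": estimate=" << mset.get_matches_estimated()
                      << " match=" << (t1 - t0) << " sec"
                      << " fetch=" << (t2 - t1) << " sec" << std::endl;
        }
    } catch (const Xapian::Error &e) {
        std::cerr << "Xapian exception: " << e.get_msg() << std::endl;
        return 1;
    }
    return 0;
}

If the match time stays high on the second and third runs even for a single database, then it isn't just a cold-cache effect.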
This is very strange (and frustrating)... I hope I gave enough details.
If not, please let me know.
Since in some cases, on the nodes that show variance in estimates, two
searches show the same estimate while the third is different, it may be
that this variance would show up on all the nodes if tried enough times.
I think the speed is overall very poor, especially for repeated
searches, considering it should all be in the OS cache.
Last notes, Olly:
1. Regarding your stable-sort theory, if it's something we can test, let
me know how.
2. What undefined value might anything depend on?
3. I don't know the testsuite and valgrind, or what you are referring to
there. I'll have a look into that and into oprofile. If you have
any pointers, that would help.
Looking forward to your thoughts..
Ron
Richard Boulton wrote:
> Ron Kass wrote:
>> * Estimates vary, although it's exactly the same search done right one
>> after the other with no changes to the DB (no data added). This is
>> not really a big issue.
>
> This is the issue which looks oddest to me, however (though there are
> other oddities). If I understood you correctly, each search is
> performed on the same Database object, without reopening the Database
> object between each search. This should result in exactly the same
> results (and estimates) for each repeated search, since a Database
> object accesses a snapshot of the underlying database. Since this
> isn't happening, there must be something we don't understand about
> your setup, and this is the first thing to resolve.
>
> Are you using the network (ie, "remote") backend? If so, the problem
> could be that some of the database connections are timing out
> occasionally.
>
> Is the database mounted locally, or over a network filing system?
>
> If the database isn't being modified while searches are in progress,
> and/or Database objects aren't being reopened between searches,
> something very weird is happening. Try doing the search on just a
> single database, and see if you get the same effects. If not, add in
> databases gradually until you do.
>
>
> Other than that, it would be useful to separate timings for the search
> (ie, the get_mset() call) from the calls which fetch the document
> data, to see where most of the time is being spent.
>
>
> > 4. Does anyone have ideas on how we can profile things to see why we
> > have such performance issues? We would be happy to dig deeper to find
> > out where the problems arise.
>
> oprofile is your best bet for seeing where the time is being spent.
>
Olly Betts wrote:
> On Tue, Oct 23, 2007 at 10:25:47PM +0100, Richard Boulton wrote:
>
>> Ron Kass wrote:
>>
>>> * Estimates vary, although it's exactly the same search done right one
>>> after the other with no changes to the DB (no data added). This is not
>>> really a big issue.
>>>
>> This is the issue which looks oddest to me, however (though there are
>> other oddities).
>>
>
> It would be interesting to work out why this is happening. I suspect
> doing so will also reveal why the timings vary.
>
>
>> If I understood you correctly, each search is
>> performed on the same Database object, without reopening the Database
>> object between each search. This should result in exactly the same
>> results (and estimates) for each repeated search, since a Database
>> object accesses a snapshot of the underlying database.
>>
>
> I have a theory. We sort containers of postlists when building the
> postlist tree from the query. If we don't always use a stable sort,
> then the exact tree built may vary between runs, which I think could
> easily affect both the estimated number of matches and also how much
> work the query requires (and so how long it takes).
>
> If this is happening, we should see if using a stable sort for such
> cases imposes any overhead. I suspect it wouldn't since this is a
> one-off cost per query, and we're sorting a small number of items
> (at most the number of terms in the query).
>
> Another possibility is that there's some dependence on an undefined
> value. The testsuite is clean under valgrind, but I'm sure there's
> code which doesn't have testsuite coverage.
>
> Ron: assuming this is on Linux, if you run a few of these varying cases
> under valgrind, does it report any problems?
>
> Cheers,
> Olly
>