MongoDB shutting down
I have a problem with MongoDB shutting down. It throws a segmentation fault and then shuts down. The error log is given below. Please suggest what is causing the error.
Wed May 11 12:50:53 db version v1.6.5, pdfile version 4.5
Wed May 11 12:50:53 git version: 0eb017e9b2828155a67c5612183337b89e12e291
Wed May 11 12:50:53 sys info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_37
Wed May 11 12:50:53 [initandlisten] waiting for connections on port 27017
Wed May 11 12:50:53 [websvr] web admin interface listening on port 28017
Wed May 11 12:51:03 [initandlisten] connection accepted from 127.0.0.1:36745 #1
Wed May 11 12:51:03 [conn1] end connection 127.0.0.1:36745
Wed May 11 12:51:05 [initandlisten] connection accepted from 127.0.0.1:36747 #2
Wed May 11 12:51:05 [conn2] end connection 127.0.0.1:36747
Wed May 11 12:51:05 [initandlisten] connection accepted from 127.0.0.1:36748 #3
Wed May 11 12:51:05 [conn3] error: have index [twitter.home_timeline.$aves_user_id_1] but no NamespaceDetails
Wed May 11 12:51:05 [conn3] end connection 127.0.0.1:36748
Wed May 11 12:51:09 [initandlisten] connection accepted from 127.0.0.1:36752 #4
Wed May 11 12:51:09 [conn4] end connection 127.0.0.1:36752
Wed May 11 12:51:10 [initandlisten] connection accepted from 127.0.0.1:36753 #5
Wed May 11 12:51:10 [conn5] dropDatabase twitter
Wed May 11 12:51:10 [conn5] query twitter.$cmd ntoreturn:1 command: { dropDatabase: 1 } reslen:74 113ms
Wed May 11 12:51:10 [conn5] end connection 127.0.0.1:36753
Wed May 11 12:51:10 [initandlisten] connection accepted from 127.0.0.1:36754 #6
Wed May 11 12:51:11 [conn6] end connection 127.0.0.1:36754
Wed May 11 12:51:17 [initandlisten] connection accepted from 127.0.0.1:36755 #7
Wed May 11 12:51:17 allocating new datafile /home/lakesh/mongodb/data/twitter.ns, filling with zeroes...
Wed May 11 12:51:17 done allocating datafile /home/lakesh/mongodb/data/twitter.ns, size: 16MB, took 0 secs
Wed May 11 12:51:17 allocating new datafile /home/lakesh/mongodb/data/twitter.0, filling with zeroes...
Wed May 11 12:51:17 done allocating datafile /home/lakesh/mongodb/data/twitter.0, size: 64MB, took 0 secs
Wed May 11 12:51:17 allocating new datafile /home/lakesh/mongodb/data/twitter.1, filling with zeroes...
Wed May 11 12:51:17 done allocating datafile /home/lakesh/mongodb/data/twitter.1, size: 128MB, took 0 secs
Wed May 11 12:51:17 [conn7] building new index on { _id: 1 } for twitter.home_timeline
Wed May 11 12:51:17 [conn7] done 0 records 0secs
Wed May 11 12:51:20 allocating new datafile /home/lakesh/mongodb/data/twitter.2, filling with zeroes...
Wed May 11 12:51:20 done allocating datafile /home/lakesh/mongodb/data/twitter.2, size: 256MB, took 0 secs
Wed May 11 12:51:21 [conn7] building new index on { _id: 1 } for twitter.direct_messages
Wed May 11 12:51:21 [conn7] done 0 records 0secs
Wed May 11 12:51:21 [conn7] info: creating collection twitter.direct_messages on add index
building new index on { _id: 1 } for twitter.hash_tags
Wed May 11 12:51:21 [conn7] done 0 records 0secs
Wed May 11 12:51:21 [conn7] info: creating collection twitter.hash_tags on add index
building new index on { _id: 1 } for twitter.mentions
Wed May 11 12:51:21 [conn7] done 0 records 0secs
Wed May 11 12:51:21 [conn7] info: creating collection twitter.mentions on add index
building new index on { _id: 1 } for twitter.urls
Wed May 11 12:51:21 [conn7] done 0 records 0secs
Wed May 11 12:51:21 [conn7] info: creating collection twitter.urls on add index
building new index on { aves_user_id: 1.0 } for twitter.home_timeline
Wed May 11 12:51:22 Got signal: 11 (Segmentation fault).
Wed May 11 12:51:22 Backtrace:
0x84a7552 0xb7730400 0x8102d3e 0x8201dfc 0x820387e 0x83dbf63 0x83874ec 0x8388efd 0x838e3f8 0x839025a 0x8367ad2 0x836998b 0x84a5793 0x81cd468 0x84bf1bd 0xb75d6cc9 0xb75436ae
./mongod(_zn5mongo10abruptquitei+0x3c2) [0x84a7552]
[0xb7730400]
./mongod(_znk5mongo7bsonobj21getfielddottedorarrayerpkc+0xae) [0x8102d3e]
./mongod(_znk5mongo9indexspec8_getkeysest6vectoripkcsais3_ees1_ins_11bsonelementesais6_eerkns_7bsonobjerst3setis9_ns_22bsonobjcmpdefaultorderesais9_ee+0x8c) [0x8201dfc]
./mongod(_znk5mongo9indexspec7getkeyserkns_7bsonobjerst3setis1_ns_22bsonobjcmpdefaultorderesais1_ee+0x24e) [0x820387e]
./mongod(_znk5mongo12indexdetails17getkeysfromobjecterkns_7bsonobjerst3setis1_ns_22bsonobjcmpdefaultorderesais1_ee+0x33) [0x83dbf63]
./mongod(_zn5mongo14fastbuildindexepkcpns_16namespacedetailserns_12indexdetailsei+0x69c) [0x83874ec]
./mongod() [0x8388efd]
./mongod(_zn5mongo11datafilemgr6insertepkcpkvibrkns_11bsonelementeb+0xbc8) [0x838e3f8]
./mongod(_zn5mongo11datafilemgr16insertwithobjmodepkcrns_7bsonobjeb+0x6a) [0x839025a]
./mongod(_zn5mongo14receivedinserterns_7messageerns_5curope+0x3a2) [0x8367ad2]
./mongod(_zn5mongo16assembleresponseerns_7messageerns_10dbresponseerkns_8sockaddre+0x19bb) [0x836998b]
./mongod(_zn5mongo10connthreadepns_13messagingporte+0x313) [0x84a5793]
./mongod(_zn5boost6detail11thread_datains_3_bi6bind_tivpfvpn5mongo13messagingporteens2_5list1ins2_5valueis6_eeeeeee3runev+0x18) [0x81cd468]
./mongod(thread_proxy+0x7d) [0x84bf1bd]
/lib/libpthread.so.0(+0x5cc9) [0xb75d6cc9]
/lib/libc.so.6(clone+0x5e) [0xb75436ae]
Wed May 11 12:51:22 dbexit:
Wed May 11 12:51:22 [conn7] shutdown: going to close listening sockets...
Wed May 11 12:51:22 [conn7] closing listening socket: 5
Wed May 11 12:51:22 [conn7] closing listening socket: 6
Wed May 11 12:51:22 [conn7] closing listening socket: 7
Wed May 11 12:51:22 [conn7] closing listening socket: 8
Wed May 11 12:51:22 [conn7] shutdown: going to flush oplog...
Wed May 11 12:51:22 [conn7] shutdown: going to close sockets...
Wed May 11 12:51:22 [conn7] shutdown: waiting for fs preallocator...
Wed May 11 12:51:22 [conn7] shutdown: closing files...
Wed May 11 12:51:22 closeAllFiles() finished
Wed May 11 12:51:22 [conn7] shutdown: removing fs lock...
Wed May 11 12:51:22 dbexit: exiting
Wed May 11 12:51:22 ERROR: Client::~Client _context should be null but is not; client:conn
Here are two options I can think of:
1. Start the local server as a slave of the remote master. Once the data has replicated to the local server, shut it down and bring it back up as a regular (master) server - see the sketch after this list.
2. Start the local server and use db.copyDatabase() or db.cloneDatabase(), via the API or the command-line client, to copy the database from the remote server to the local server - see the shell sketch after this list.
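For option 1, a minimal command-line sketch using the legacy master/slave flags available in mongod 1.6. The host remote-host:27017 is a placeholder for your remote server; the dbpath is the one from the log above:

  # the remote server must be running with --master so it keeps an oplog
  # (on the remote machine): mongod --master --dbpath <its data path>

  # on the local machine: start mongod as a slave of the remote master
  mongod --slave --source remote-host:27017 --dbpath /home/lakesh/mongodb/data

  # once the data has caught up, stop the slave and restart it without
  # --slave/--source so it runs as a regular (master) server
  mongod --dbpath /home/lakesh/mongodb/data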
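For option 2, a minimal mongo shell sketch, assuming you want the "twitter" database from the log above and that the remote server is reachable at remote-host:27017 (a placeholder):

  // from a mongo shell connected to the LOCAL server:

  // copy the remote "twitter" database into a local database of the same name
  db.copyDatabase("twitter", "twitter", "remote-host:27017")

  // or: switch to the target database and clone all its collections
  // from the remote host into the current (local) database
  use twitter
  db.cloneDatabase("remote-host:27017")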
Please give these a try - I'm positive you'll see progress.