In our last post we built a simple index over the file system. While that example works fine, it cannot be extended to a clustered environment and, because of its memory footprint, cannot be used for very large document sets. Lucene doesn't provide a built-in JDBC interface, but Compass does, though the JDBC interface of Compass is not compatible with Lucene 3.6. We will extend the Compass JDBC interface to accommodate the Lucene 3.6 changes.
First, we will create our Maven project in Eclipse and add the required dependencies.
pom.xml
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.mumz.test.lucene</groupId>
  <artifactId>ApacheLuceneTest</artifactId>
  <version>0.0.1-SNAPSHOT</version>
  <name>ApacheLuceneTest</name>
  <description>ApacheLuceneTest</description>
  <dependencies>
    <dependency>
      <groupId>org.apache.lucene</groupId>
      <artifactId>lucene-core</artifactId>
      <type>jar</type>
      <version>3.6.1</version>
    </dependency>
    <dependency>
      <groupId>org.compass-project</groupId>
      <artifactId>compass</artifactId>
      <version>2.2.0</version>
    </dependency>
    <dependency>
      <groupId>mysql</groupId>
      <artifactId>mysql-connector-java</artifactId>
      <version>5.1.21</version>
    </dependency>
  </dependencies>
</project>
Second, since the JdbcDirectory provided with Compass 2.2.0 doesn't implement all the methods defined in the abstract class Directory, we will provide our own implementation which extends JdbcDirectory.

MyJDBCDirectory.java
package com.mumz.test.lucene.jdbc;

import java.io.IOException;

import javax.sql.DataSource;

import org.apache.lucene.store.jdbc.JdbcDirectory;
import org.apache.lucene.store.jdbc.JdbcDirectorySettings;
import org.apache.lucene.store.jdbc.JdbcStoreException;
import org.apache.lucene.store.jdbc.dialect.Dialect;
import org.apache.lucene.store.jdbc.support.JdbcTable;

/**
 * The Class MyJDBCDirectory.
 *
 * @author prabhat.jha
 */
public class MyJDBCDirectory extends JdbcDirectory {

    public MyJDBCDirectory(DataSource dataSource, Dialect dialect, JdbcDirectorySettings settings, String tableName) {
        super(dataSource, dialect, settings, tableName);
    }

    public MyJDBCDirectory(DataSource dataSource, Dialect dialect, String tableName) {
        super(dataSource, dialect, tableName);
    }

    public MyJDBCDirectory(DataSource dataSource, JdbcDirectorySettings settings, String tableName) throws JdbcStoreException {
        super(dataSource, settings, tableName);
    }

    public MyJDBCDirectory(DataSource dataSource, JdbcTable table) {
        super(dataSource, table);
    }

    public MyJDBCDirectory(DataSource dataSource, String tableName) throws JdbcStoreException {
        super(dataSource, tableName);
    }

    /**
     * Lucene 3.6 declares listAll() as abstract on Directory; delegate to the
     * list() method that JdbcDirectory already provides.
     *
     * @see org.apache.lucene.store.Directory#listAll()
     */
    @Override
    public String[] listAll() throws IOException {
        return super.list();
    }
}
JdbcDirectory works against a predefined table structure: each index file is stored as a row, with its contents kept in a large-object (BLOB) column.
Third, we will write our indexer code. In it we will:

- Add a few records to our database using JDBCBatchInsert.java
- Create the table used by JdbcDirectory via the createIndexTable method
- Build an index over these records using the index method
Let's add some records to our database to test our application.
JDBCBatchInsert.java
package com.mumz.test.lucene.jdbc;

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

/**
 * The Class JDBCBatchInsert.
 *
 * @author prabhat.jha
 */
public class JDBCBatchInsert {

    /** The insert statement for the BOOKS table. */
    private static final String QUERY = "INSERT INTO BOOKS (BOOK_ID, BOOK_NAME, BOOK_AUTHOR, BOOK_PUBLISHER) VALUES (?, ?, ?, ?)";

    /** First parts of the generated book names. */
    private final static String[] BOOK_FIRST_PART = {"Spring", "Hibernate", "Lucene", "Mahout", "JPA", "JSF",
            "Swing", "Hadoop", "Hbase"};

    /** Last parts of the generated book names. */
    private final static String[] BOOK_LAST_PART = {"In Action", "Complete Reference", "Demystified", "Tutorial",
            "Explained", "Simplified", "Bible", "Cook Book", "Crash Course"};

    /** The Constant BLANK_SPACE. */
    private final static String BLANK_SPACE = " ";

    /**
     * Insert test records into the BOOKS table as a single JDBC batch.
     */
    public void insertRecords() {
        Connection connection = null;
        PreparedStatement pstmt = null;
        ResultSet resultSet = null;
        try {
            connection = JDBCDatabaseUtil.getConnection();
            pstmt = connection.prepareStatement(QUERY);
            int index = 0;
            for (String firstPart : BOOK_FIRST_PART) {
                for (String lastPart : BOOK_LAST_PART) {
                    pstmt.setInt(1, ++index);
                    pstmt.setString(2, firstPart + BLANK_SPACE + lastPart);
                    pstmt.setString(3, "Test Author" + BLANK_SPACE + firstPart + BLANK_SPACE + lastPart + index);
                    pstmt.setString(4, "Test Publisher" + BLANK_SPACE + firstPart + BLANK_SPACE + lastPart + index);
                    pstmt.addBatch();
                }
            }
            pstmt.executeBatch();
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            try {
                if (resultSet != null) {
                    resultSet.close();
                }
                if (pstmt != null) {
                    pstmt.close();
                }
                if (connection != null) {
                    connection.close();
                }
                resultSet = null;
                pstmt = null;
                connection = null;
            } catch (SQLException e) {
                e.printStackTrace();
            }
        }
    }
}
Next, we will build the index in our database.
JDBCIndexer.java
package com.mumz.test.lucene.jdbc;

import java.io.IOException;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.SimpleAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.CorruptIndexException;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.LockObtainFailedException;
import org.apache.lucene.store.jdbc.JdbcDirectory;
import org.apache.lucene.store.jdbc.dialect.MySQLDialect;
import org.apache.lucene.util.Version;

/**
 * The Class JDBCIndexer.
 *
 * @author prabhat.jha
 */
public class JDBCIndexer {

    /** The jdbc directory. */
    private Directory jdbcDirectory = null;

    public JDBCIndexer(Directory jdbcDirectory) {
        super();
        this.jdbcDirectory = jdbcDirectory;
    }

    public Directory getJdbcDirectory() {
        if (jdbcDirectory == null) {
            throw new IllegalStateException("Index not yet built, rerun indexing");
        }
        return jdbcDirectory;
    }

    public void setJdbcDirectory(Directory jdbcDirectory) {
        this.jdbcDirectory = jdbcDirectory;
    }

    /**
     * Builds the index.
     */
    public void buildIndex() {
        createAndBuildIndex();
    }

    private void createAndBuildIndex() {
        createIndexTable();
        index();
    }

    private void index() {
        Analyzer analyzer = new SimpleAnalyzer(Version.LUCENE_36);
        IndexWriterConfig indexWriterConfig = new IndexWriterConfig(Version.LUCENE_36, analyzer);
        IndexWriter indexWriter = null;
        try {
            indexWriter = new IndexWriter(getJdbcDirectory(), indexWriterConfig);
            addIndex(indexWriter);
        } catch (CorruptIndexException e) {
            e.printStackTrace();
        } catch (LockObtainFailedException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            if (indexWriter != null) {
                try {
                    indexWriter.close();
                } catch (CorruptIndexException e) {
                    e.printStackTrace();
                } catch (IOException e) {
                    e.printStackTrace();
                } finally {
                    indexWriter = null;
                }
            }
        }
    }

    /**
     * Index the records present in the BOOKS table.
     *
     * @param indexWriter the index writer
     */
    private void addIndex(IndexWriter indexWriter) {
        Connection connection = null;
        PreparedStatement pstmt = null;
        ResultSet resultSet = null;
        try {
            connection = JDBCDatabaseUtil.getConnection();
            String query = "SELECT BOOK_ID, BOOK_NAME, BOOK_AUTHOR, BOOK_PUBLISHER FROM BOOKS";
            pstmt = connection.prepareStatement(query);
            resultSet = pstmt.executeQuery();
            while (resultSet.next()) {
                Document document = new Document();
                document.add(new Field("name", String.valueOf(resultSet.getString(2)), Field.Store.YES, Field.Index.ANALYZED));
                document.add(new Field("author", String.valueOf(resultSet.getString(3)), Field.Store.YES, Field.Index.ANALYZED));
                document.add(new Field("publisher", String.valueOf(resultSet.getString(4)), Field.Store.YES, Field.Index.ANALYZED));
                indexWriter.addDocument(document);
            }
        } catch (SQLException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            try {
                if (resultSet != null) {
                    resultSet.close();
                }
                if (pstmt != null) {
                    pstmt.close();
                }
                if (connection != null) {
                    connection.close();
                }
            } catch (SQLException e) {
                e.printStackTrace();
            }
        }
    }

    private void createIndexTable() {
        if (this.jdbcDirectory == null) {
            setJdbcDirectory(new MyJDBCDirectory(JDBCDatabaseUtil.getDataSource(), new MySQLDialect(), "LUCENE_INDEX_TABLE"));
        }
        try {
            // No need to create the index table manually; the create()
            // method does it automatically.
            ((JdbcDirectory) getJdbcDirectory()).create();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
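Once the index is built, we can search it through the same JdbcDirectory. The JDBCSearcher below is a minimal sketch (the class name and the sample query are illustrative only), using the stock Lucene 3.6 query API against the name field we indexed:

package com.mumz.test.lucene.jdbc;

import org.apache.lucene.analysis.SimpleAnalyzer;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.jdbc.dialect.MySQLDialect;
import org.apache.lucene.util.Version;

/**
 * Minimal searcher over the JDBC-backed index built by JDBCIndexer.
 */
public class JDBCSearcher {

    public static void main(String[] args) throws Exception {
        // Open the same JDBC-backed directory the indexer wrote to.
        Directory directory = new MyJDBCDirectory(JDBCDatabaseUtil.getDataSource(),
                new MySQLDialect(), "LUCENE_INDEX_TABLE");
        IndexReader reader = IndexReader.open(directory);
        IndexSearcher searcher = new IndexSearcher(reader);
        try {
            // Query the "name" field populated by JDBCIndexer.addIndex().
            QueryParser parser = new QueryParser(Version.LUCENE_36, "name",
                    new SimpleAnalyzer(Version.LUCENE_36));
            Query query = parser.parse("lucene");
            TopDocs topDocs = searcher.search(query, 10);
            for (ScoreDoc scoreDoc : topDocs.scoreDocs) {
                System.out.println(searcher.doc(scoreDoc.doc).get("name"));
            }
        } finally {
            searcher.close();
            reader.close();
        }
    }
}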
Last, we used the getDataSource and getConnection methods while inserting records and building the index; these two utility methods are part of our database util class.
JDBCDatabaseUtil.java
package com.mumz.test.lucene.jdbc;

import java.sql.Connection;
import java.sql.SQLException;

import javax.sql.DataSource;

import com.mysql.jdbc.jdbc2.optional.MysqlDataSource;

/**
 * The Class JDBCDatabaseUtil.
 *
 * @author prabhat.jha
 */
public class JDBCDatabaseUtil {

    /**
     * Gets the data source.
     *
     * @return the data source
     */
    public static DataSource getDataSource() {
        MysqlDataSource dataSource = new MysqlDataSource();
        dataSource.setUser("root");
        dataSource.setPassword("root");
        dataSource.setUrl("jdbc:mysql://localhost:3306/search_schema?emulateLocators=true&useUnicode=true&characterEncoding=UTF-8&useFastDateParsing=false");
        return dataSource;
    }

    /**
     * Gets the connection.
     *
     * @return the connection
     * @throws SQLException if a database access error occurs
     */
    public static Connection getConnection() throws SQLException {
        return getDataSource().getConnection();
    }
}
Finally, the SQL script to create the database schema used in this tutorial.
SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0;
SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, FOREIGN_KEY_CHECKS=0;
SET @OLD_SQL_MODE=@@SQL_MODE, SQL_MODE='TRADITIONAL';

DROP SCHEMA IF EXISTS `search_schema`;
CREATE SCHEMA IF NOT EXISTS `search_schema` DEFAULT CHARACTER SET utf8;
USE `search_schema`;

-- -----------------------------------------------------
-- Table `search_schema`.`books`
-- -----------------------------------------------------
DROP TABLE IF EXISTS `search_schema`.`books`;

CREATE TABLE IF NOT EXISTS `search_schema`.`books` (
  `BOOK_ID` INT(11) NOT NULL AUTO_INCREMENT,
  `BOOK_NAME` VARCHAR(45) NOT NULL,
  `BOOK_AUTHOR` VARCHAR(45) NOT NULL,
  `BOOK_PUBLISHER` VARCHAR(45) NOT NULL,
  PRIMARY KEY (`BOOK_ID`)
) ENGINE = InnoDB
  AUTO_INCREMENT = 82
  DEFAULT CHARACTER SET = utf8;

SET SQL_MODE=@OLD_SQL_MODE;
SET FOREIGN_KEY_CHECKS=@OLD_FOREIGN_KEY_CHECKS;
SET UNIQUE_CHECKS=@OLD_UNIQUE_CHECKS;
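To run the whole example end to end, a small driver class is enough; the App class below is a minimal illustrative sketch wiring together the steps listed above:

package com.mumz.test.lucene.jdbc;

/**
 * Minimal driver: inserts sample rows, then builds the JDBC-backed index.
 */
public class App {

    public static void main(String[] args) {
        // Step 1: add a few BOOKS records to search over.
        new JDBCBatchInsert().insertRecords();

        // Steps 2 and 3: passing null lets JDBCIndexer create its own
        // MyJDBCDirectory (and the LUCENE_INDEX_TABLE) before indexing.
        new JDBCIndexer(null).buildIndex();
    }
}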
Hi,
Thank you for your excellent article. I just want to ask: how is the performance compared to a regular FS directory?
Thanks.
Performance is comparable; once your index is created, searching won't take much time. When dealing with a database you will definitely have some I/O hits, but in my experience the performance has not been too bad. Hibernate Search, which uses Lucene underneath, is also worth a look.
Hi,
I'm getting this error:
Exception in thread "main" java.lang.AbstractMethodError: org.apache.lucene.store.Directory.listAll()[Ljava/lang/String;
at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:602)
at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:554)
at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:359)
at org.apache.lucene.index.IndexReader.indexExists(IndexReader.java:1099)
at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:1112)
at org.karsha.lucene.jdbc.JDBCIndexer.index(JDBCIndexer.java:101)
at org.karsha.lucene.jdbc.JDBCIndexer.createAndBuildIndex(JDBCIndexer.java:86)
at org.karsha.lucene.jdbc.JDBCIndexer.buildIndex(JDBCIndexer.java:78)
at org.karsha.lucene.jdbc.App.main(App.java:14)
Any workaround to overcome this?
Thanks
Found the solution: instead of using org.apache.lucene.store.Directory, it should be using MyJDBCDirectory, right? That's the purpose of creating it, right? This solves the runtime error.
Yeah, correct, glad you got it working.
JdbcDirectory is not compatible with the latest version of Lucene, hence we have to override the default JdbcDirectory.
The index is not inserted into the database; here's the correction for the addIndex method: http://paste.org/59500
Yeah, true. I think while moving from Eclipse to WordPress I copied an older version or something; I've updated the code snippets. Thanks
Can the same work with Lucene 4.0?
I am not completely sure; I will have to check.
Can you please check it and let me know? Or is there some open-source project with similar functionality?
Okay, I had a look at Lucene 4.0.0 and Compass 2.2.0. They are not compatible, and there are significant changes in Lucene 4; I will highlight a few with respect to Compass.
Fixing them all may require a significant amount of time and effort, though I will try to get something working.
I am not sure if Compass is working on the latest developments; you can take a look at Hibernate Search, which also uses Lucene underneath.
Hi Prabhat
I'm getting some exceptions while adding documents to the index. I have posted my question here; please check if you can give an answer: http://stackoverflow.com/questions/14356935/exception-thrown-when-try-to-add-documents-to-the-lucene-index-continuously-insi
That error seems to be originating from your database. I just reran this whole example (even recreated my schema) and it worked perfectly fine. I have also left an answer on Stack Overflow; check if that helps.
Thank you for the excellent tutorial. I was able to use it to make Hibernate Search use a JdbcDirectory for storing Lucene indices. This required implementing a JdbcDirectoryProvider, but your post helped.
Hi Prabhat
I'm using Lucene version 4.0.0 and Compass 2.2.0 to store the Lucene index in MySQL.
I am getting the following error and am stuck with it:
Exception in thread "main" java.lang.NoSuchMethodError: org.apache.lucene.store.IndexInput: method <init>()V not found
at org.apache.lucene.store.jdbc.handler.NoOpFileEntryHandler$NoOpIndexInput.<init>(NoOpFileEntryHandler.java:34)
at org.apache.lucene.store.jdbc.handler.NoOpFileEntryHandler$NoOpIndexInput.<init>(NoOpFileEntryHandler.java:34)
at org.apache.lucene.store.jdbc.handler.NoOpFileEntryHandler.<clinit>(NoOpFileEntryHandler.java:86)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Unknown Source)
at org.apache.lucene.store.jdbc.JdbcFileEntrySettings.getSettingAsClass(JdbcFileEntrySettings.java:175)
at org.apache.lucene.store.jdbc.JdbcFileEntrySettings.getSettingAsClass(JdbcFileEntrySettings.java:157)
at org.apache.lucene.store.jdbc.JdbcDirectory.initialize(JdbcDirectory.java:173)
at org.apache.lucene.store.jdbc.JdbcDirectory.<init>(JdbcDirectory.java:122)
at CreateIndexToDB.createIndex(CreateIndexToDB.java:38)
at CreateIndexToDB.main(CreateIndexToDB.java:87)
Also, can you tell me which will perform better: storing the index in the FS or in the DB?
Thanks in advance
Lucene 4.0 and Compass 2.2.0 are not compatible; there are many changes in Lucene 4.0, and I am not sure if Compass is in an active development stage. As far as performance is concerned, the database will definitely be a touch slower compared to the filesystem.
Thanks for the good article and your quick reply.
Let me know if you happen to succeed using Lucene 4.0.
I am working on the Compass 2.2.0 codebase itself to get some tutorial up and running. I am not sure if I can get the full library compatible.
Hi Prabhat,
This is a question you have answered previously on Stack Overflow regarding creating the JDBC index. I have modified the error log; can you suggest any workaround?
http://stackoverflow.com/questions/14356935/exception-thrown-when-try-to-add-documents-to-the-lucene-index-continuously-insi
Replied on SO, hope that helps
Thanks for the reply. Can you tell me if this is the right way of doing it? (It seems this is wrong; I'm still getting the error.) Maybe you can send a simple example of how to change the default values?
JdbcDirectorySettings settings = new JdbcDirectorySettings();
JdbcFileEntrySettings fileEntrySettings = new JdbcFileEntrySettings();
long value = 320000;
fileEntrySettings.setLongSetting("INDEX_OUTPUT_THRESHOLD_SETTING", value);
settings.registerFileEntrySettings("INDEX_OUTPUT_TYPE_SETTING", fileEntrySettings);
jdbcIn.setJdbcDirectory(new MyJDBCDirectory(JDBCDatabaseUtil.getDataSource(), new MySQLDialect(), settings, "lucene_index_table"));
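Close, but my reading of the Compass 2.2.0 API is that the threshold is keyed by the constant RAMAndFileJdbcIndexOutput.INDEX_OUTPUT_THRESHOLD_SETTING (not the literal string) and is normally applied to the default file entry settings rather than registered under an arbitrary name. A sketch, assuming the getDefaultFileEntrySettings and setLongSetting methods of the Compass settings classes:

package com.mumz.test.lucene.jdbc;

import org.apache.lucene.store.jdbc.JdbcDirectorySettings;
import org.apache.lucene.store.jdbc.dialect.MySQLDialect;
import org.apache.lucene.store.jdbc.index.RAMAndFileJdbcIndexOutput;

/**
 * Illustrative helper: build a directory with a larger RAM-to-file
 * spill threshold for index output.
 */
public class CustomSettingsExample {

    public static MyJDBCDirectory directoryWithLargerBuffer() {
        JdbcDirectorySettings settings = new JdbcDirectorySettings();
        // Set the threshold under the constant's actual key on the default
        // file entry settings, so it applies to every index file.
        settings.getDefaultFileEntrySettings().setLongSetting(
                RAMAndFileJdbcIndexOutput.INDEX_OUTPUT_THRESHOLD_SETTING, 320000L);
        return new MyJDBCDirectory(JDBCDatabaseUtil.getDataSource(),
                new MySQLDialect(), settings, "lucene_index_table");
    }
}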
Hi Prabhat, I'm having some problems with custom JdbcDirectorySettings; I have posted a new question here: http://stackoverflow.com/questions/15378162/how-to-increase-compass-lucene-ramandfilejdbcindexoutput-buffer-size. I'd appreciate it if you can suggest any workaround.
Thanks
Hi, I am new here. I did not see where you test this example (where is main)? Thanks
Hi Prabhat Jha, thank you for your excellent article. I just want to ask you: I have a problem with MySQL and PostgreSQL 9.2. This is the problem:

org.apache.lucene.store.jdbc.JdbcStoreException: Failed to execute sql [insert into LUCENE_INDEX_TABLE (name_, value_, size_, lf_, deleted_) values ( ?, ?, ?, current_timestamp, ? )]; nested exception is org.postgresql.util.PSQLException: Les Large Objects ne devraient pas être utilisés en mode auto-commit.
org.postgresql.util.PSQLException: Les Large Objects ne devraient pas être utilisés en mode auto-commit.
at org.postgresql.largeobject.LargeObjectManager.createLO(LargeObjectManager.java:239)
at org.postgresql.largeobject.LargeObjectManager.createLO(LargeObjectManager.java:226)
at org.postgresql.jdbc2.AbstractJdbc2Statement.setBlob(AbstractJdbc2Statement.java:3048)
at org.apache.lucene.store.jdbc.index.AbstractJdbcIndexOutput$1.fillPrepareStatement(AbstractJdbcIndexOutput.java:55)
at org.apache.lucene.store.jdbc.support.JdbcTemplate.executeUpdate(JdbcTemplate.java:174)
at org.apache.lucene.store.jdbc.index.AbstractJdbcIndexOutput.close(AbstractJdbcIndexOutput.java:47)
at org.apache.lucene.store.jdbc.index.RAMAndFileJdbcIndexOutput.close(RAMAndFileJdbcIndexOutput.java:81)
at org.apache.lucene.util.IOUtils.close(IOUtils.java:141)
at org.apache.lucene.index.FieldsWriter.close(FieldsWriter.java:139)
at org.apache.lucene.index.StoredFieldsWriter.flush(StoredFieldsWriter.java:55)
at org.apache.lucene.index.DocFieldProcessor.flush(DocFieldProcessor.java:59)
at org.apache.lucene.index.DocumentsWriter.flush(DocumentsWriter.java:581)
at org.apache.lucene.index.IndexWriter.doFlush(IndexWriter.java:3587)
at org.apache.lucene.index.IndexWriter.prepareCommit(IndexWriter.java:3376)
at org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:3485)
at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:3467)
at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:3451)
at test.lucene.chaima.JDBCIndexer.addIndex(JDBCIndexer.java:137)
at test.lucene.chaima.JDBCIndexer.index(JDBCIndexer.java:92)
at test.lucene.chaima.JDBCIndexer.createAndBuildIndex(JDBCIndexer.java:78)
at test.lucene.chaima.JDBCIndexer.buildIndex(JDBCIndexer.java:69)
at test.lucene.chaima.JDBCIndexer.main(JDBCIndexer.java:172)

org.apache.lucene.store.jdbc.JdbcStoreException: Failed to execute sql [insert into LUCENE_INDEX_TABLE (name_, value_, size_, lf_, deleted_) values ( ?, ?, ?, current_timestamp, ? )]; nested exception is org.postgresql.util.PSQLException: Les Large Objects ne devraient pas être utilisés en mode auto-commit.
org.postgresql.util.PSQLException: Les Large Objects ne devraient pas être utilisés en mode auto-commit.
at org.postgresql.largeobject.LargeObjectManager.createLO(LargeObjectManager.java:239)
at org.postgresql.largeobject.LargeObjectManager.createLO(LargeObjectManager.java:226)
at org.postgresql.jdbc2.AbstractJdbc2Statement.setBlob(AbstractJdbc2Statement.java:3048)
at org.apache.lucene.store.jdbc.index.AbstractJdbcIndexOutput$1.fillPrepareStatement(AbstractJdbcIndexOutput.java:55)
at org.apache.lucene.store.jdbc.support.JdbcTemplate.executeUpdate(JdbcTemplate.java:174)
at org.apache.lucene.store.jdbc.index.AbstractJdbcIndexOutput.close(AbstractJdbcIndexOutput.java:47)
at org.apache.lucene.store.jdbc.index.RAMAndFileJdbcIndexOutput.close(RAMAndFileJdbcIndexOutput.java:81)
at org.apache.lucene.store.ChecksumIndexOutput.close(ChecksumIndexOutput.java:61)
at org.apache.lucene.index.SegmentInfos.finishCommit(SegmentInfos.java:863)
at org.apache.lucene.index.IndexWriter.finishCommit(IndexWriter.java:3501)
at org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:3490)
at org.apache.lucene.index.IndexWriter.closeInternal(IndexWriter.java:1873)
at org.apache.lucene.index.IndexWriter.close(IndexWriter.java:1812)
at org.apache.lucene.index.IndexWriter.close(IndexWriter.java:1776)
at test.lucene.chaima.JDBCIndexer.index(JDBCIndexer.java:102)
at test.lucene.chaima.JDBCIndexer.createAndBuildIndex(JDBCIndexer.java:78)
at test.lucene.chaima.JDBCIndexer.buildIndex(JDBCIndexer.java:69)
at test.lucene.chaima.JDBCIndexer.main(JDBCIndexer.java:172)

org.apache.lucene.index.IndexNotFoundException: no segments* file found in test.lucene.chaima.MyJDBCDirectory@9506dc4 lockFactory=null: files: [write.lock]
at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:667)
at org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:72)
at org.apache.lucene.index.IndexReader.open(IndexReader.java:256)
at test.lucene.chaima.JDBCSearcher.search(JDBCSearcher.java:56)
at test.lucene.chaima.JDBCIndexer.buildIndex(JDBCIndexer.java:70)
at test.lucene.chaima.JDBCIndexer.main(JDBCIndexer.java:172)

Thanks in advance. I found many solutions but none solved my problem. Please, I need the solution, if anyone can help me. Thanks.
I cannot understand the main part of the stack trace since it's not in English; can you please provide an English version of your stack trace?
This is the stack trace in English:

org.apache.lucene.store.jdbc.JdbcStoreException: Failed to execute sql [insert into LUCENE_INDEX_TABLE (name_, value_, size_, lf_, deleted_) values ( ?, ?, ?, current_timestamp, ? )]; nested exception is org.postgresql.util.PSQLException: Large Objects should not be used in auto-commit mode.
org.postgresql.util.PSQLException: Large Objects should not be used in auto-commit mode.
at org.postgresql.largeobject.LargeObjectManager.createLO(LargeObjectManager.java:239)
at org.postgresql.largeobject.LargeObjectManager.createLO(LargeObjectManager.java:226)
at org.postgresql.jdbc2.AbstractJdbc2Statement.setBlob(AbstractJdbc2Statement.java:3048)
at org.apache.lucene.store.jdbc.index.AbstractJdbcIndexOutput$1.fillPrepareStatement(AbstractJdbcIndexOutput.java:55)
at org.apache.lucene.store.jdbc.support.JdbcTemplate.executeUpdate(JdbcTemplate.java:174)
at org.apache.lucene.store.jdbc.index.AbstractJdbcIndexOutput.close(AbstractJdbcIndexOutput.java:47)
at org.apache.lucene.store.jdbc.index.RAMAndFileJdbcIndexOutput.close(RAMAndFileJdbcIndexOutput.java:81)
at org.apache.lucene.util.IOUtils.close(IOUtils.java:141)
at org.apache.lucene.index.FieldsWriter.close(FieldsWriter.java:139)
at org.apache.lucene.index.StoredFieldsWriter.flush(StoredFieldsWriter.java:55)
at org.apache.lucene.index.DocFieldProcessor.flush(DocFieldProcessor.java:59)
at org.apache.lucene.index.DocumentsWriter.flush(DocumentsWriter.java:581)
at org.apache.lucene.index.IndexWriter.doFlush(IndexWriter.java:3587)
at org.apache.lucene.index.IndexWriter.prepareCommit(IndexWriter.java:3376)
at org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:3485)
at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:3467)
at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:3451)
at test.lucene.chaima.JDBCIndexer.addIndex(JDBCIndexer.java:137)
at test.lucene.chaima.JDBCIndexer.index(JDBCIndexer.java:92)
at test.lucene.chaima.JDBCIndexer.createAndBuildIndex(JDBCIndexer.java:78)
at test.lucene.chaima.JDBCIndexer.buildIndex(JDBCIndexer.java:69)
at test.lucene.chaima.JDBCIndexer.main(JDBCIndexer.java:172)

org.apache.lucene.store.jdbc.JdbcStoreException: Failed to execute sql [insert into LUCENE_INDEX_TABLE (name_, value_, size_, lf_, deleted_) values ( ?, ?, ?, current_timestamp, ? )]; nested exception is org.postgresql.util.PSQLException: Large Objects should not be used in auto-commit mode.
org.postgresql.util.PSQLException: Large Objects should not be used in auto-commit mode.
at org.postgresql.largeobject.LargeObjectManager.createLO(LargeObjectManager.java:239)
at org.postgresql.largeobject.LargeObjectManager.createLO(LargeObjectManager.java:226)
at org.postgresql.jdbc2.AbstractJdbc2Statement.setBlob(AbstractJdbc2Statement.java:3048)
at org.apache.lucene.store.jdbc.index.AbstractJdbcIndexOutput$1.fillPrepareStatement(AbstractJdbcIndexOutput.java:55)
at org.apache.lucene.store.jdbc.support.JdbcTemplate.executeUpdate(JdbcTemplate.java:174)
at org.apache.lucene.store.jdbc.index.AbstractJdbcIndexOutput.close(AbstractJdbcIndexOutput.java:47)
at org.apache.lucene.store.jdbc.index.RAMAndFileJdbcIndexOutput.close(RAMAndFileJdbcIndexOutput.java:81)
at org.apache.lucene.store.ChecksumIndexOutput.close(ChecksumIndexOutput.java:61)
at org.apache.lucene.index.SegmentInfos.finishCommit(SegmentInfos.java:863)
at org.apache.lucene.index.IndexWriter.finishCommit(IndexWriter.java:3501)
at org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:3490)
at org.apache.lucene.index.IndexWriter.closeInternal(IndexWriter.java:1873)
at org.apache.lucene.index.IndexWriter.close(IndexWriter.java:1812)
at org.apache.lucene.index.IndexWriter.close(IndexWriter.java:1776)
at test.lucene.chaima.JDBCIndexer.index(JDBCIndexer.java:102)
at test.lucene.chaima.JDBCIndexer.createAndBuildIndex(JDBCIndexer.java:78)
at test.lucene.chaima.JDBCIndexer.buildIndex(JDBCIndexer.java:69)
at test.lucene.chaima.JDBCIndexer.main(JDBCIndexer.java:172)

org.apache.lucene.index.IndexNotFoundException: no segments* file found in test.lucene.chaima.MyJDBCDirectory@9506dc4 lockFactory=null: files: [write.lock]
at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:667)
at org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:72)
at org.apache.lucene.index.IndexReader.open(IndexReader.java:256)
at test.lucene.chaima.JDBCSearcher.search(JDBCSearcher.java:56)
at test.lucene.chaima.JDBCIndexer.buildIndex(JDBCIndexer.java:70)
at test.lucene.chaima.JDBCIndexer.main(JDBCIndexer.java:172)
Hi,
This is a PostgreSQL issue: "Large Objects should not be used in auto-commit mode" at org.postgresql.largeobject.LargeObjectManager.createLO(LargeObjectManager.java:239). Can you try using MySQL? It should just work.
Hi Prabhat Jha,
With MySQL it also gave me the same issue: "Large Objects should not be used in auto-commit mode".
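One more thing worth trying: the Compass JDBC store expects to manage commits itself when writing large objects, and Compass ships org.apache.lucene.store.jdbc.datasource.TransactionAwareDataSourceProxy for wrapping a data source so connections are not left in auto-commit mode. A sketch of wrapping our MySQL data source (the helper class name is made up, and the proxy's one-argument constructor is an assumption about the Compass 2.2.0 API):

package com.mumz.test.lucene.jdbc;

import javax.sql.DataSource;

import org.apache.lucene.store.jdbc.datasource.TransactionAwareDataSourceProxy;

import com.mysql.jdbc.jdbc2.optional.MysqlDataSource;

/**
 * Variant of JDBCDatabaseUtil.getDataSource() that hands the JDBC store a
 * transaction-aware wrapper instead of a raw, auto-committing data source.
 */
public class NonAutoCommitDataSourceUtil {

    public static DataSource getDataSource() {
        MysqlDataSource dataSource = new MysqlDataSource();
        dataSource.setUser("root");
        dataSource.setPassword("root");
        dataSource.setUrl("jdbc:mysql://localhost:3306/search_schema?emulateLocators=true");
        // Wrap the real data source so connections used by JdbcDirectory
        // are managed (and committed) by the Compass JDBC store itself.
        return new TransactionAwareDataSourceProxy(dataSource);
    }
}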
Hi Prabhat, I am new to this and badly needed an article like this. To start with, let me thank you for this wonderful tutorial. I am involved in developing REST web services and now need to write a service that takes a search parameter as input, searches the DB against it, and returns the result to the client. My question is: is it fine to use Apache Lucene for this scenario? If so, please suggest how I can implement it. I have the code explained by you and I am able to run it. An index table is created in the DB. Now, how do I search? Please suggest!
Thank you so much! Can we index and search across different databases using Lucene? Please give me code if so.