Update Info

SUSE-SLE-Module-Packagehub-Subpackages-15-SP6-2024-2620


Recommended update for ant, lucene, mysql-connector-java, univocity-parsers


Type: recommended
Severity: moderate
Issued: 2024-07-30
Description:
This update for ant, lucene, mysql-connector-java, univocity-parsers fixes the following issues:

ant:

- Add forgotten open-test-reporting/events to ant.d/junitlauncher

lucene was updated from version 8.5.0 to 8.11.2:

- API Changes:

  * SimpleFSDirectory is deprecated in favor of NIOFSDirectory (see the sketch after this list).
  * Removed ability to set DocumentsWriterPerThreadPool on IndexWriterConfig.
    The DocumentsWriterPerThreadPool is a package-protected final class, which made it impossible to customize.
  * MergeScheduler#merge no longer accepts a parameter indicating whether a new merge was found.
  * SortFields are now responsible for writing themselves into index headers if they are used as index sorts.
  * Deprecate SimpleBindings#add(SortField).
  * MergeScheduler is now decoupled from IndexWriter. Instead it accepts a MergeSource interface that offers the basic
    methods to acquire pending merges, run the merge and do accounting around it.
  * QueryVisitor.consumeTermsMatching() now takes a Supplier<ByteRunAutomaton> to enable queries that build large
    automata to provide them lazily. TermsInSetQuery switches to using this method to report matching terms.
  * DocValues.emptySortedNumeric() no longer takes a maxDoc parameter
  * CodecUtil#checkFooter(IndexInput, Throwable) now throws a CorruptIndexException if checksums mismatch or if
    checksums can't be verified.
  * TieredMergePolicy#setMaxMergeAtOnceExplicit is deprecated and the number of segments that get merged via explicit
    merges is unlimited by default.
  * Lucene's facet module's DocValuesOrdinalsReader.decode method is now public, making it easier for applications to
    decode facet ordinals into their corresponding labels
  * Field comparators for numeric fields and _doc were moved to their own package. TopFieldCollector sets
    TotalHits.relation to GREATER_THAN_OR_EQUAL_TO, as soon as the requested total hits threshold is reached, even
    though in some cases no skipping optimization is applied and all hits are collected.
  * IndexingChain now accepts individual primitives rather than a DocumentsWriterPerThread instance in order to create
    a new DocConsumer.
  * Removed deprecation warning from IndexWriter#getFieldNames().
  * Change the getValue method from IntTaxonomyFacets to be protected instead of private. Users can now access the
    count of an ordinal directly without constructing an extra FacetLabel. Also use variable length arguments for the
    getOrdinal call in TaxonomyReader.
  * DrillSideways allows sub-classes to provide "drill down" FacetsCollectors. They may provide a null collector if
    they choose to bypass "drill down" facet collection.
  * Add a new Directory reader open API from indexCommit and a custom comparator for sorting leaf readers
  * Replaced the ScoreCachingWrappingScorer ctor with a static factory method that ensures unnecessary wrapping doesn't occur.
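
  As a minimal sketch of the SimpleFSDirectory deprecation noted above (the index path is a placeholder, not part of
  this update), a caller can switch to NIOFSDirectory like this:

    import java.nio.file.Paths;
    import org.apache.lucene.store.Directory;
    import org.apache.lucene.store.NIOFSDirectory;

    public class OpenIndexDirectory {
        public static void main(String[] args) throws Exception {
            // Previously: new SimpleFSDirectory(path); NIOFSDirectory is the suggested replacement.
            try (Directory dir = new NIOFSDirectory(Paths.get("/path/to/index"))) {
                System.out.println("Opened " + dir);
            }
        }
    }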

- New Features:

  * Grouping by range based on values from DoubleValuesSource and LongValuesSource
  * Add IndexWriter merge-on-commit feature to selectively merge small segments on commit, subject to a configurable
    timeout, to improve search performance by reducing the number of small segments for searching
  * Add IndexWriter merge-on-refresh feature to selectively merge small segments on getReader, subject to a
    configurable timeout, to improve search performance by reducing the number of small segments for searching.
  * Doc values now allow configuring how to trade compression for retrieval speed.
  * Add FacetsConfig option to control which drill-down terms are indexed for a FacetLabel
  * RegExpQuery added case insensitive matching option.
  * Add CJKWidthCharFilter and its factory
  * Add utility class to retrieve facet labels from the taxonomy index for a facet field so such fields do not also
    have to be redundantly stored
  * Allow sorting an index after it was created.
    With SortingCodecReader, existing unsorted segments can be wrapped and merged into a fresh index using the
    IndexWriter#addIndices API (see the sketch after this list).
  * Custom order for leaves in IndexReader and IndexWriter
  * Added smoothingScore method and default implementation to the Scorable abstract class. The smoothing score allows
    scorers to calculate a score for a document where the search term or subquery is not present. The smoothing score
    acts like an idf, so that documents missing frequent terms or subqueries are not penalized as much as documents
    missing less frequent ones, and it prevents scores that are the product of term or subquery scores from going to
    zero. Added the implementation of the Indri AND query and the IndriDirichletSimilarity from the academic Indri
    search engine: http://www.lemurproject.org/indri.php.
  * New LatLonPoint query that accepts an array of LatLonGeometries.
  * New XYPoint query that accepts an array of XYGeometries.
  * TypeAsSynonymFilter has been enhanced to support ignoring some types, and to allow the generated synonyms to copy
    some or all flags from the original token
  * A token filter to drop tokens that match all specified flags.
  * PatternTypingFilter has been added to allow setting a type attribute on tokens based on a configured set of regular
    expressions
  * FeatureField now supports newLinearQuery, which scores using the raw indexed feature values without any
    transformation.
  * LatLonPoint query support for spatial relationships.
  * New tool for creating a deterministic index to enable benchmarking changes on a consistent multi-segment index even
    when they require re-indexing.
  * New facet counting implementation for general string doc value fields (SortedSetDocValues / SortedDocValues) not
    created through FacetsConfig
  * The SimpleText codec now writes skip lists.
  * Analyzer and stemmer for Telugu language
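
  A minimal sketch of the SortingCodecReader feature referenced above, assuming a destination IndexWriter whose
  IndexWriterConfig was created with setIndexSort on the same sort; the "timestamp" field and variable names are
  placeholders, not part of this update:

    import org.apache.lucene.index.CodecReader;
    import org.apache.lucene.index.DirectoryReader;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.index.LeafReaderContext;
    import org.apache.lucene.index.SlowCodecReaderWrapper;
    import org.apache.lucene.index.SortingCodecReader;
    import org.apache.lucene.search.Sort;
    import org.apache.lucene.search.SortField;
    import org.apache.lucene.store.Directory;

    public class SortExistingIndex {
        // Wrap every existing segment with SortingCodecReader and merge it into the sorted index.
        public static void sortInto(Directory unsortedDir, IndexWriter sortedWriter) throws Exception {
            Sort indexSort = new Sort(new SortField("timestamp", SortField.Type.LONG));
            try (DirectoryReader reader = DirectoryReader.open(unsortedDir)) {
                for (LeafReaderContext ctx : reader.leaves()) {
                    CodecReader segment = SlowCodecReaderWrapper.wrap(ctx.reader());
                    sortedWriter.addIndexes(SortingCodecReader.wrap(segment, indexSort));
                }
                sortedWriter.commit();
            }
        }
    }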

- Improvements:

  * Use same code-path for updateDocuments and updateDocument in IndexWriter and DocumentsWriter.
  * Update dictionary version for Ukrainian analyzer to 4.9.1
  * PerFieldDocValuesFormat should not get the DocValuesFormat on a field that has no doc values.
  * Removed ThreadState abstraction from DocumentsWriter which allows pooling of DWPT directly and improves the 
    approachability of the IndexWriter code.
  * Add an ID to SegmentCommitInfo in order to compare commits for equality and make snapshots incremental on
    generational files.
  * TotalHits' relation will be EQUAL_TO when the number of hits is lower than TopDocsCollector's numHits
  * Metadata of the terms dictionary moved to its own file, with the '.tmd' extension. This allows checksums of
    metadata to be verified when opening indices and helps save seeks when opening an index.
  * SegmentInfos#readCommit now always throws a CorruptIndexException if the content of the file is invalid.
  * Make FunctionScoreQuery use ScoreMode.COMPLETE for creating the inner query weight when ScoreMode.TOP_DOCS is
    requested.
  * Make FacetsConfig.DELIM_CHAR publicly accessible
  * UniformSplit supports encodable fields metadata.
  * Improved truncation detection for points.
  * Let MultiCollector handle minCompetitiveScore
  * Add a new ExpressionValueSource which will enforce only one value per name per hit in dependencies,
    ExpressionFunctionValues will no longer recompute already computed values
  * Fix CheckIndex to print an invalid non-zero norm as unsigned long when detecting corruption.
  * FieldInfo#checkConsistency was called twice from Lucene50(60)FieldInfosFormat#read; removed the redundant assert
    and now perform these checks for real.
  * In BooleanQuery rewrite, always remove MatchAllDocsQuery filter clauses when possible.
  * Improve coverage for Asserting* test classes: make sure to handle singleton doc values, and sometimes exercise
    Weight#scorer instead of Weight#bulkScorer for top-level queries.
  * Include StoredFieldsWriter in DWPT accounting to ensure that its heap consumption is taken into account when
    IndexWriter stalls or should flush DWPTs.
  * Include TermVectorsWriter in DWPT accounting to ensure that its heap consumption is taken into account when
    IndexWriter stalls or should flush DWPTs.
  * In query shapes over shape fields, skip points while traversing the BKD tree when the relationship with the
    document is already known.
  * Use more compact data structures to represent sorted doc-values in memory when sorting a segment before flush
    and in SortingCodecReader.
  * WordDelimiterGraphFilter should order tokens at the same position by endOffset to emit longer tokens first.
    The same graph is produced.
  * Optimize facet counting for single-valued SSDV / StringValueFacetCounts.
  * GlobalOrdinalsWithScore should not compute occurrences when the provided min is 1.
  * ICUNormalizer2CharFilter no longer requires normalization-inert characters as boundaries for incremental
    processing, vastly improving worst-case performance.
  * ExitableTermsEnum should sample timeout and interruption check before calling next().
  * Make CheckIndex concurrent by parallelizing index check across segments.
  * Add compression to terms dict from SortedSet/Sorted DocValues.
  * Binary doc values fields now expose their configured compression mode in the attributes of the field info.
  * BM25FQuery was extended to handle similarities beyond BM25Similarity. It was renamed to CombinedFieldQuery to
    reflect its more general scope (see the sketch after this list).
  * Reduce index size by increasing allowable exceptions in PForUtil from 3 to 7.
  * Hunspell support improvements: add API for spell-checking and suggestions, support compound words, fix various
    behavior differences between Java and C++ implementations, improve performance
  * The BEST_SPEED compression mode now trades more compression ratio in exchange for faster reads.
  * Enable bulk merge for stored fields with index sort.
  * Allow DrillSideways users to provide their own CollectorManager without also requiring them to provide an
    ExecutorService.
  * Extend DrillSideways to support exposing FacetCollectors directly.
  * Support for multi-value fields in LongRangeFacetCounts and DoubleRangeFacetCounts.
  * Added QueryProfilerIndexSearcher and ProfilerCollector to support debugging query execution strategy and timing.
  * Operations.getCommonSuffix/Prefix(Automaton) is now much more efficient, from a worst case exponential down to
    quadratic cost in the number of states + transitions in the Automaton. These methods no longer use the costly
    determinize method, removing the risk of TooComplexToDeterminizeException
  * Operations.determinize now throws TooComplexToDeterminizeException based on too much "effort" spent determinizing
    rather than a precise state count on the resulting returned automaton, to better handle adversarial cases like
    det(rev(regexp("(.*a){2000}"))) that spend lots of effort but result in smallish eventual returned automata.
  * Stop sorting determinize powersets unnecessarily.
  * Evaluate score in DrillSidewaysScorer.doQueryFirstScoring
  * Decrease default for LRUQueryCache's skipCacheFactor to 10. This prevents caching a query clause when it is much
    more expensive than running the top-level query.
  * Make QueryCache respect Accountable queries
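
  As a hedged illustration of the CombinedFieldQuery rename noted above: this sketch assumes the sandbox Builder API
  with addField/addTerm methods, and the field names, weights, and term are placeholders, not part of this update:

    import org.apache.lucene.sandbox.search.CombinedFieldQuery;
    import org.apache.lucene.search.Query;
    import org.apache.lucene.util.BytesRef;

    public class CombinedFieldQueryExample {
        public static Query newTitleBodyQuery() {
            // Scores the term "lucene" across title and body as if they were one combined field,
            // with title weighted twice as heavily.
            return new CombinedFieldQuery.Builder()
                    .addField("title", 2.0f)
                    .addField("body", 1.0f)
                    .addTerm(new BytesRef("lucene"))
                    .build();
        }
    }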
  
- Optimizations:

  * UniformSplit keeps FST off-heap.
  * DoubleValuesSource and QueryValueSource now use a TwoPhaseIterator if one is provided by the Query.
  * UsageTrackingQueryCachingPolicy no longer caches DocValuesFieldExistsQuery.
  * FST.Arc.BitTable reads FST bytes directly. Arc is lightweight again and FSTEnum traversal is faster.
  * Fail precommit on unparameterised log messages and examine for wasted work/objects
  * Speed up geometry queries by specialising Component2D spatial operations. Instead of using a generic
    relate method for all relations, we use specialized methods for each one. In addition, the type of triangle is
    computed at deserialization time, therefore we can be more selective when decoding points of a triangle.
  * Always build trees with full leaves and lower the default value for maxPointsPerLeafNode to 512.
  * Points now write their index in a separate file.
  * Add an ability for field comparators to skip non-competitive documents. Creating a TopFieldCollector with
    totalHitsThreshold less than Integer.MAX_VALUE instructs Lucene to skip non-competitive documents whenever
    possible. For numeric sort fields the skipping functionality works when the same field is indexed both with doc
    values and points. To indicate that the same data is stored in both points and doc values, the
    SortField#setCanUsePoints method should be used (see the sketch after this list).
  * ConstantValuesSource now shares a single DoubleValues instance across all segments
  * Stored fields now get higher compression ratios on highly compressible data.
  * FunctionMatchQuery now accepts a "matchCost" optimization hint.
  * Indexing with an index sort is now faster by not compressing temporary representations of the data.
  * Enhance DocComparator to provide an iterator over competitive documents when searching with "after". This iterator
    can quickly position on the desired "after" document skipping all documents and segments before "after".
  * QueryParser: re-use the LookaheadSuccess exception.
  * WANDScorer now supports queries that have a 'minimumNumberShouldMatch' configured.
  * Reduced memory usage for OrdinalMap when a segment has all values.
  * Faster decoding of postings for some numbers of bits per value.
  * Substantially improve RAM efficiency of how MemoryIndex stores postings in memory, and reduced a bit of RAM
    overhead in IndexWriter's internal postings book-keeping
  * Speed up merging of stored fields and term vectors for smaller segments.
  * Performance improvement for BKD index building
  * Improved memory efficiency of IndexWriter's RAM buffer, in particular in the case of many fields and many indexing
    threads.
  * Lucene90DocValuesFormat was using too many bits per value when compressing via gcd, unnecessarily wasting index
    storage.
  * Rewrite empty DisjunctionMaxQuery to MatchNoDocsQuery.
  * Slightly faster segment merging for sorted indices.
  * Improve IntroSorter with 3-way partitioning
  * FacetsCollector will not request scores if it does not use them
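
  A minimal sketch of the comparator-skipping optimization described above, assuming a hypothetical "timestamp" field
  indexed both as a LongPoint and a NumericDocValuesField; the searcher, query, and sizes are placeholders:

    import org.apache.lucene.search.IndexSearcher;
    import org.apache.lucene.search.Query;
    import org.apache.lucene.search.Sort;
    import org.apache.lucene.search.SortField;
    import org.apache.lucene.search.TopDocs;
    import org.apache.lucene.search.TopFieldCollector;

    public class SortSkippingExample {
        // Collect the 10 newest hits; skipping only applies when the sort field is indexed with points and doc values.
        public static TopDocs searchNewest(IndexSearcher searcher, Query query) throws Exception {
            SortField byTimestamp = new SortField("timestamp", SortField.Type.LONG, true);
            byTimestamp.setCanUsePoints(); // declare that points and doc values carry the same data
            Sort sort = new Sort(byTimestamp);
            // A totalHitsThreshold below Integer.MAX_VALUE lets Lucene skip non-competitive documents.
            TopFieldCollector collector = TopFieldCollector.create(sort, 10, 1000);
            searcher.search(query, collector);
            return collector.topDocs();
        }
    }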

- Bugs fixed:

  * Fix corruption of the new gen field infos when doc values updates are applied on a segment created externally and
    added to the index with IndexWriter#addIndexes(Directory).
  * Holding levenshtein automata on FuzzyQuery can end up blowing up query caches which use query objects as cache
    keys, so building the automata is now delayed to search time again.
  * Fix wrong NGramFilterFactory argument name for preserveOriginal option
  * DocValuesRewriteMethod.visit wasn't visiting its embedded query
  * DocTermsIndexDocValues assumed it was operating on a SortedDocValues (single-valued) field when it could be
    multi-valued if used with a SortedSetSelector
  * Ensure IW processes all internal events before it closes itself on a rollback.
  * Return default value from objectVal when doc doesn't match the query in QueryValueSource
  * Fix for potential NPE in TermFilteredPresearcher for empty fields
  * Wait for #addIndexes merges when aborting merges.
  * Ensure CMS updates its thread accounting data structures consistently. CMS today releases its lock after finishing
    a merge before it re-acquires it to update the thread accounting data structures. This causes threading issues
    where concurrently finishing threads fail to pick up pending merges, causing potential thread starvation on
    forceMerge calls.
  * Single-document monitor runs were using the less efficient MultiDocumentBatch implementation.
  * Fix equality check in ExpressionValueSource#rewrite. This fixes rewriting of inner value sources.
  * IndexWriter incorrectly calls closeMergeReaders twice when the merged segment is 100% deleted.
  * Tessellator might build illegal polygons when several holes share the same vertex.
  * Tessellator might build illegal polygons when several holes are connected to the same vertex.
  * Fix ordered intervals over interleaved terms
  * The UnifiedHighlighter was closing the underlying reader when there were multiple term-vector fields. This was a
    regression in 8.6.0.
  * Prevent DWPTDeleteQueue from referencing itself and leaking memory. The queue passed an implicit this reference to
    the next queue instance on flush which leaked about 500 bytes of memory on each full flush, commit or getReader call.
  * Fix a regression where the unified highlighter didn't produce highlights on fuzzy queries that correspond to exact
    matches.
  * Fix NRTCachingDirectory to use Directory#fileLength to check if a file already exists instead of opening an
    IndexInput on the file which might throw an AccessDeniedException in some Directory implementations.
  * Fixed a bug in IndexSortSortedNumericDocValuesRangeQuery where it could violate the DocIdSetIterator contract.
  * Include field in ComplexPhraseQuery's toString()
  * Fix TermRangeQuery when there is no upper bound and the lower bound is the empty string excluded. This would
    previously match no strings at all while it should match all non-empty strings.
  * Fix NPE in SpanWeight#explain when no scoring is required and SpanWeight has null Similarity.SimScorer.
  * DocumentsWriter was only stalling threads for 1 second, allowing documents to be indexed even though the
    DocumentsWriter wasn't able to keep up with flushing. Unless IW can't make progress due to an ill-behaving DWPT,
    this issue was barely noticeable.
  * Japanese tokenizer should discard the compound token instead of disabling the decomposition of long tokens when 
    discardCompoundToken is activated.
  * Make Component2D#withinPoint implementations consistent with ShapeQuery logic.
  * Wrap boolean queries generated by shape fields with a Constant score query.
  * Fix per-field memory leak in IndexWriter.deleteAll(). Reset next available internal field number to 0 on
    FieldInfos.clear(), to avoid wasting FieldInfo references.
  * BM25FQuery - Mask encoded norm long value in array lookup.
  * When encoding triangles in ShapeField, make sure generated triangles are CCW by rotating triangle points before
    checking triangle orientation.
  * Fix deadlock in TermsEnum.EMPTY that occurs when trying to initialize TermsEnum and BaseTermsEnum at the same time
  * NPE on a degenerate query in MinimumShouldMatchIntervalsSource $MinimumMatchesIterator.getSubMatches().
  * DoubleValuesSource.fromQuery (also used by FunctionScoreQuery.boostByQuery) could throw an exception when the query
    implements TwoPhaseIterator and when the score is requested repeatedly.
  * BytesRefHash.equals/find is now thread safe, fixing a Luwak/Monitor bug causing registered queries to sometimes
    fail to match.
  * Fix Circle2D intersectsLine t-value (distance) range clamp
  * Fixed parameter use in RadixSelector.
  * LongValueFacetCounts should count each document at most once when determining the total count for a dimension.
    Prior to this fix, multi-value docs could contribute a > 1 count to the dimension count.
  * Fixed performance regression for boolean queries that configure a minimum number of matching clauses.
  * FlattenGraphFilter is now more robust when handling incoming holes in the input token graph
  * Duplicate long values in a document field should only be counted once when using SortedNumericDocValuesFields
  * Do not throw NullPointerException while trying to handle another exception in ReplicaNode.start
  * Fix DrillSideways correctness bug
  * Fix edge case failure in TestStringValueFacetCounts
  * CombinedFieldQuery can fail with an exception when document is missing some fields.
  * Respect ignoreCase in CommonGramsFilterFactory
  * DocComparator should not skip docs with the same docID on multiple sorts with search after
  * Fix CombinedFieldQuery equals and hashCode, which ensures query rewrites don't drop CombinedFieldQuery clauses.
  * Correct CombinedFieldQuery scoring when there is a single field.
  * Counting bug fixed in StringValueFacetCounts.
  * Ensure DrillSidewaysQuery instances never get cached.
  * Skip deleted docs when accumulating facet counts for all docs
  * KoreanTokenizer should check the max backtrace gap on whitespaces.
  * Sort optimization can wrongly skip the first document of each segment
  * MultiCollector now handles single leaf collector that wants to skip low-scoring hits but the combined score
    mode doesn't allow it
  * The bytes used by DocsWithFieldSet were not accounted for in NormValuesWriter
  * The bytes used by DocsWithFieldSet and currentValues were not accounted for in SortedSetDocValuesWriter
  * Sort optimization with search_after can wrongly skip documents whose values are equal to the last value of the
    previous page
  * Sort optimization with a chunked bulk scorer can wrongly skip documents
  * ConcurrentSortedSetDocValuesFacetCounts shouldn't share liveDocs Bits across threads
  * NumericLeafComparator to define getPointValues
  * Ensure that the minimum competitive score does not decrease in concurrent search
  * Highlighter:
    WeightedSpanTermExtractor.extractWeightedSpanTerms now calls Query#rewrite multiple times if necessary
  * Make sure SparseFixedBitSet#or updates ramBytesUsed

- Documentation:

  * Add a performance warning to AttributeSource.captureState javadocs

- Changes in runtime behaviour:

  * SortingCodecReader no longer caches doc values fields. Previously, SortingCodecReader used to cache all
    doc values fields after they were loaded into memory.
    This reader should only be used to sort segments after the fact using IndexWriter#addIndices.

- Other changes:

  * Always keep FST off-heap. FSTLoadMode, Reader attributes and openedFromWriter removed.
  * Checksums of the terms index are now verified when LeafReader#checkIntegrity is called rather than when opening the
    index.
  * Update Javadoc about normalizeEntry in the Kuromoji DictionaryBuilder.
  * Make TestLatLonMultiPolygonShapeQueries more resilient for CONTAINS queries.
  * Adjust TestLucene60PointsFormat#testEstimatePointCount2Dims so it does not fail when a point is shared by multiple
    leaves.
  * ByteBufferIndexInput was refactored to work on top of the ByteBuffer API.
  * Make LineFileDocs's random seeking more efficient, making tests using LineFileDocs faster
  * Refactors SimpleBindings to improve type safety and cycle detection
  * Change the way the multi-dimensional BKD tree builder generates the intermediate tree representation to be equal to
    the one dimensional case to avoid unnecessary tree and leaves rotation.
  * poll_mirrors.py release script can handle HTTPS mirrors.
  * Fix or suppress 13 resource leak precommit warnings in lucene/replicator
  * Always keep BKD index off-heap. BKD reader does not implement Accountable any more.
  * Refactor BKD point configuration into its own class.
  * Make TestXYMultiPolygonShapeQueries more resilient for CONTAINS queries.
  * Move LockFactory stress test to be a unit/integration test.
  * Removes some unused code and replaces the Point implementation on ShapeField/ShapeQuery random tests.
  * Removed the pure Maven build. It is no longer possible to build artifacts using Maven (this feature was no longer
    working correctly). Due to migration to Gradle for Lucene/Solr 9.0, the maintenance of the Maven build was no
    longer reasonable. POM files are generated for deployment to Maven Central only. Please use "ant generate-maven-artifacts"
    to produce and deploy artifacts to any repository.
  * Migrate Maven tasks to use "maven-resolver-ant-tasks" instead of the no longer maintained "maven-ant-tasks".
  * Upgrade jetty to 9.4.41
  * Fix WANDScorer assertion error.
  * Add docs/links to GermanAnalyzer describing how to decompound nouns
  * Update Jetty to 9.4.34

mysql-connector-java was updated to version 8.4.0:

- Removed OpenTelemetry support, which was added upstream
- Avoid producing duplicate maven data
- Changes in version 8.4.0:

  * Added support for VECTOR data type.
  * Fixed tests failing due to removal of deprecated features.
  * Fixed join condition for retrieval of imported primary keys.
  * GPL License Exception Update.
  * Updated SyntaxRegressionTest.java.
  * Replaced StringBuffer with StringBuilder in ValueEncoders
  * Fixed DatabaseMetaData that specifies incorrect extra name characters.
  * Fixed: setting the fetch size on a Statement object had no effect.
  * Fixed: getParameterBindings() on a PreparedStatement returned an NPE when not all parameters were bound.
  * Removed support for FIDO authentication
  * Only call Messages.getString(...) when it's needed (when the SQLException is thrown)
  * Fixed: client hang when loadBalanceStrategy is bestResponseTime.

- Includes changes from 8.3.0:

  * Fixed redundant "Reset stmt" when setting useServerPrepStmts and cachePrepStmts to true
  * Fixed improper comment parsing in Connector/J.
  * Fixed: setting a large timeout led to errors when executing SQL.
  * Upgrade 3rd party libraries and tools.
  * Upgrade Protocol Buffers dependency to protobuf-java-3.25.1.
  * Fixed issue with mysql-connector-j 8.0.33 connector (XDEVAPI) - getSession is slow.
  * Fixed CallableStatement::getParameterMetaData reports incorrect parameterCount.
  * Fixed executeUpdate throws SQLException on queries that are only comments.
  * getWarnings() of StatementImpl contains all warnings.
  * Fixed Unexpected list of permitted ciphers.
  * Fixed jdbc.MysqlParameterMetadata#isNullable doesn't check whether to be simple.
  * Fixed Parameter metadata inferred incorrectly when procedure or function doesn't exist.
  * Fixed execution of a stored procedure when a function with the same name exists.

- Changes in version 8.2.0:

  * Added the missing implementation for Connection.releaseSavepoint()
  * Connector/J now supports WebAuthn Authentication. See Connecting Using Web Authentication (WebAuthn) Authentication
    for details.
  * The auto-deserialization function for BLOB objects, deprecated since release 8.1.0, is now removed.
  * The SessionStateChanges objects failed to provide proper values for session state changes. This was because
    Connector/J parsed the OK_Packet incorrectly, and this patch fixes the issue.
  * Using javax.sql.rowset.CachedRowSet#getDate() or javax.sql.rowset.CachedRowSet#getTimestamp() on DATETIME fields
    resulted in a ClassCastException. This was because the default return type of DATETIME fields returned by
    ResultSet.getObject() was java.time.LocalDateTime instead of java.sql.Timestamp. To prevent the exception, a new
    connection property, treatMysqlDatetimeAsTimestamp, now allows the return type of DATETIME by ResultSet.getObject()
    to be changed to java.sql.Timestamp (see the sketch after this list).
  * Obtaining a connection from a MysqlConnectionPoolDataSource made Connector/J reset its connection state unless the
    connection property paranoid was set to true. During the reset, the autocommit mode of the session was restored
    to the default value specified on the server by the system variable autocommit, while the JDBC specification
    mandates that autocommit always be enabled for a freshly created connection. With this patch, the connection reset
    will always enable autocommit in this situation.
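
  A minimal sketch of the treatMysqlDatetimeAsTimestamp property mentioned above; the JDBC URL, credentials, table,
  and column are placeholders, not part of this update:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;
    import java.sql.Timestamp;

    public class DatetimeAsTimestampExample {
        public static void main(String[] args) throws Exception {
            // With treatMysqlDatetimeAsTimestamp=true, ResultSet.getObject() on DATETIME columns
            // returns java.sql.Timestamp instead of java.time.LocalDateTime.
            String url = "jdbc:mysql://localhost:3306/test?treatMysqlDatetimeAsTimestamp=true";
            try (Connection conn = DriverManager.getConnection(url, "user", "password");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT created_at FROM events LIMIT 1")) {
                if (rs.next()) {
                    Timestamp created = (Timestamp) rs.getObject("created_at");
                    System.out.println(created);
                }
            }
        }
    }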

- Changes in version 8.1.0:

  * Deprecated autoDeserialize feature.
  * Fix KeyManagementException: FIPS mode: only SunJSSE TrustManagers may be used.
  * Fixed Issue in JDBC PreparedStatement on adding NO_BACKSLASH_ESCAPES in sql_mode.

univocity-parsers:

- Add Automatic-Module-Name to the manifest


References


No references

Packages


  • ant-antlr-1.10.14-150200.4.28.1
  • ant-junit5-1.10.14-150200.4.28.1
  • lucene-8.11.2-150200.4.7.1
  • univocity-parsers-2.9.1-150200.3.7.8