The following items describe additional changes and information about this release. In some cases, the descriptions provide links to further detailed information about an issue or a change. This page does not duplicate the descriptions provided by the other JDK 9 Release Notes pages; you should be aware of the content in those documents as well as the items described on this page.
The descriptions below also identify potential compatibility issues that you might encounter when migrating to JDK 9. See the JDK 9 Migration Guide for descriptions of specific compatibility issues.
The Kinds of Compatibility page on the OpenJDK wiki identifies three types of potential compatibility issues for Java programs; these types are used in the descriptions below:
Source: Source compatibility concerns translating Java source code into class files.
Binary: Binary compatibility is defined in The Java Language Specification as preserving the ability to link without error.
Behavioral: Behavioral compatibility includes the semantics of the code that is executed at runtime.
See the Compatibility & Specification Review (CSR) page on the OpenJDK wiki for more information about compatibility as it relates to JDK 9.
The value of the static final int field java.awt.font.OpenType.TAG_OPBD was incorrect. It erroneously used the same value as TAG_MORT (0x6D6F7274) and has been changed to the correct value, 0x6F706264.
Although this is strictly an incompatible binary change, the likelihood of any practical impact on applications is near zero. The opbd table is used only in AAT fonts (https://developer.apple.com/fonts/TrueType-Reference-Manual/RM06/Chap6opbd.html), which are likely to be extremely rare in the wild because they are natively understood only by macOS and iOS. This table is not critical to the rendering of text by Java or anything else, so nothing goes looking for the table, and nothing inside the JDK utilises any part of this class.
The JDK does not provide a way to directly utilize these values. No Java API currently exists that accepts them, and the class cannot become useful unless an additional Java API is added.
Even if an application were to use it, by passing the Java field's value to some custom native code that looks up the table, the lookup is likely to return "null" both before and after this change: a representative sampling of six OS X fonts found none of them to have either table.
The lifecycle management of AWT menu components exposed problems on certain platforms. This fix improves state synchronization between menus and their containers.
There are some platforms like Mac OS X 10.11 that may not support showing the user-specified title in a file dialog.
The following description is added to the java.awt.FileDialog class constructors and setTitle(String) method: "Note: Some platforms may not support showing the user-specified title in a file dialog. In this situation, either no title will be displayed in the file dialog's title bar or, on some systems, the file dialog's title bar will not be displayed".
Three static fields exposing event listener instances, whose types are internal and whose intended use was internal, are now private. These fields are very unlikely to have been used by many applications, as until recently they were shipped only in an unbundled component.
Since Java SE 1.4, javax.imageio.spi.ServiceRegistry has provided a facility roughly equivalent to the Java SE 1.6 java.util.ServiceLoader. This image I/O facility is now restricted to supporting SPIs defined as part of javax.imageio. Applications which use it for other purposes need to be re-coded to use ServiceLoader.
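For illustration, a minimal sketch of the ServiceLoader approach, assuming a hypothetical provider interface WidgetProvider standing in for whatever SPI the application previously registered with ServiceRegistry:
import java.util.ServiceLoader;

public class SpiMigration {
    // Hypothetical provider interface; not part of the JDK.
    public interface WidgetProvider {
        String name();
    }

    public static void main(String[] args) {
        // ServiceLoader discovers implementations listed in
        // META-INF/services/SpiMigration$WidgetProvider (or declared by modules).
        for (WidgetProvider p : ServiceLoader.load(WidgetProvider.class)) {
            System.out.println("Found provider: " + p.name());
        }
    }
}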
The MouseWheelEvent.getWheelRotation() method returned rounded native NSEvent deltaX/Y values on Mac OS X. The latest macOS Sierra 10.12 produces very small NSEvent deltaX/Y values, so rounding and summing them led to huge values being returned from MouseWheelEvent.getWheelRotation(). The JDK-8166591 fix accumulates NSEvent deltaX/Y values, and the MouseWheelEvent.getWheelRotation() method now returns a non-zero value only when the accumulated value exceeds a threshold, and zero otherwise. This is compliant with the MouseWheelEvent.getWheelRotation() specification (https://docs.oracle.com/javase/8/docs/api/java/awt/event/MouseWheelEvent.html#getWheelRotation--):
Returns the number of "clicks" the mouse wheel was rotated, as an integer. A partial rotation may occur if the mouse supports a high-resolution wheel. In this case, the method returns zero until a full "click" has been accumulated.
For precise wheel rotation values, use the MouseWheelEvent.getPreciseWheelRotation() method instead.
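A minimal listener sketch showing both methods (the JPanel is a placeholder component):
import javax.swing.JPanel;

public class WheelDemo {
    public static void main(String[] args) {
        JPanel panel = new JPanel(); // placeholder component
        panel.addMouseWheelListener(e ->
            // getWheelRotation() reports whole "clicks" and stays zero until a
            // full click accumulates; getPreciseWheelRotation() is fractional.
            System.out.println("clicks=" + e.getWheelRotation()
                    + " precise=" + e.getPreciseWheelRotation()));
    }
}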
The focus behavior of Swing toggle button controls (JRadioButton and JCheckBox) changed when they belonged to a button group. Now, if the input focus is requested to any toggle button in the group through either focus traversal or window activation, the currently selected toggle button is focused regardless of the focus traversal policy used in the container. If the selected toggle button is not eligible to be a focus owner, the focus is set according to the focus traversal policy.
The ProgressMonitor dialog can be closed in the following ways: (1) by pressing the "Cancel" button, (2) by clicking the dialog's close (X) button, or (3) by pressing the Escape key.
If the ProgressMonitor dialog was closed, the ProgressMonitor.isCanceled() method used to return 'true' only in cases (1) and (2) above. This fix corrects the behavior so that ProgressMonitor.isCanceled() also returns 'true' when the ProgressMonitor dialog is closed by pressing the Escape key.
The compatibility impact of this fix is low: the change may affect user code that (incorrectly) assumes ProgressMonitor.isCanceled() will return false even if the ProgressMonitor dialog is closed as a result of pressing the Escape key. Also, with this change, there is now no way to get the ProgressMonitor dialog out of the way while having progress continue.
Some applications have used core reflection to instantiate JDK-internal Swing L&Fs, i.e. system L&Fs such as the Windows L&F: Class.forName("com.sun.java.swing.plaf.windows.WindowsLookAndFeel")
These classes are internal to the JDK and applications should have always treated them as such.
As of JDK 9, whether these are accessible to applications depends on the configuration of the Java Platform Module System and the value of the --illegal-access setting. By default in JDK 9 this setting's value is "permit", but it is expected to change to "deny" in a future release.
Applications which need to create a system L&F must migrate to use the new method javax.swing.UIManager.createLookAndFeel(String name).
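A sketch of the migration, assuming the platform provides a look and feel named "Windows" (see UIManager.getInstalledLookAndFeels() for the names available):
import javax.swing.UIManager;

public class SystemLafDemo {
    public static void main(String[] args) throws Exception {
        // createLookAndFeel looks the L&F up by name instead of requiring
        // reflective access to JDK-internal classes.
        UIManager.setLookAndFeel(UIManager.createLookAndFeel("Windows"));
    }
}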
The java.io classes CharArrayReader, PushbackReader, and StringReader might now block in close() if there is another thread holding the Reader.lock lock.
The read() method of these classes could previously throw a NullPointerException if the internal state of the instance had become inconsistent. This was caused by a race condition due to close() not obtaining a lock before modifying the internal state of the Reader. This lock is now obtained, which can result in close() blocking if another thread simultaneously holds the same lock on the Reader.
Prior to JDK 9, creating a FilePermission object canonicalized its pathname, and the implies and equals methods were based on this canonicalized pathname. For example, if "file" and "/path/to/current/directory/file" point to the same file in the file system, two FilePermission objects from these pathnames are equal and imply each other if their actions are also the same.
In JDK 9, the pathname will not be canonicalized by default. This means two FilePermission objects will not equal each other if one uses an absolute path and the other a relative path, or one uses a symbolic link and the other the target, or one uses a Windows long name and the other a DOS-style 8.3 name, even if they point to the same file in the file system.
A compatibility layer has been added to ensure that granting a FilePermission for a relative path will still permit applications to access the file with an absolute path (and vice versa). This works for the default Policy provider and the limited doPrivileged (http://openjdk.java.net/jeps/140) calls. For example, although a FilePermission on a file with a relative pathname of "a" no longer implies a FilePermission on the same file with an absolute pathname of "/pwd/a" (suppose "pwd" is the current working directory), granting code a FilePermission to read "a" allows that code to also read "/pwd/a" when a Security Manager is enabled. This compatibility layer does not cover translations between symbolic links and targets, or Windows long names and DOS-style 8.3 names, or any other different name forms that can be canonicalized to the same name.
A system property named jdk.io.permissionsUseCanonicalPath has been introduced. When it is set to "true", FilePermission will canonicalize its pathname as it did before JDK 9. The default value of this property is "false".
Another system property named jdk.security.filePermCompat has also been introduced. When set to "true", the compatibility layer described above will also apply to third-party Policy implementations. The default value of this property is "false".
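The default behavior and the effect of jdk.io.permissionsUseCanonicalPath can be observed with a short sketch (the relative path "a" and the use of user.dir are illustrative only):
import java.io.FilePermission;

public class FilePermCheck {
    public static void main(String[] args) {
        FilePermission relative = new FilePermission("a", "read");
        FilePermission absolute =
                new FilePermission(System.getProperty("user.dir") + "/a", "read");
        // JDK 9 default: prints false. With
        // -Djdk.io.permissionsUseCanonicalPath=true it prints true, as in JDK 8.
        System.out.println(relative.implies(absolute));
    }
}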
Class.getSimpleName() was changed to use the name recorded in the InnerClasses attribute of the class file. This change may affect applications which generate custom bytecode with incomplete or incorrect information recorded in the InnerClasses attribute.
This enhancement changes phantom references to be automatically cleared by the garbage collector, just as soft and weak references are.
An object becomes phantom reachable after it has been finalized. This change may cause phantom reachable objects to be garbage collected earlier; previously the referent was kept alive until the PhantomReference object was itself collected or cleared by the application. This potential behavioral change might impact only existing code that depends on the referent remaining in the heap until the PhantomReference itself is collected or cleared.
The deprecated methods checkTopLevelWindow, checkSystemClipboardAccess, and checkAwtEventQueueAccess in java.lang.SecurityManager have been changed to check AllPermission; they no longer check AWTPermission. Libraries that invoke these SecurityManager methods to do permission checks may require users of the library to change their policy files.
The specification of the following java.lang.ClassLoader methods for locating a resource by name has been updated to throw NullPointerException when the specified name is null:
getResource(String)
getResourceAsStream(String)
getResources(String)
Custom class loader implementations that override these methods should be updated accordingly to conform to this specification, as shown in the sketch below.
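A conforming override might look like the following sketch (MyClassLoader and its delegation strategy are hypothetical):
import java.net.URL;
import java.util.Objects;

public class MyClassLoader extends ClassLoader {
    @Override
    public URL getResource(String name) {
        Objects.requireNonNull(name);   // spec now requires NullPointerException
        return super.getResource(name); // a real loader would search its own locations
    }
}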
The java.lang.ref.Reference.enqueue method clears the reference object before adding it to the registered queue. In JDK 9, when the enqueue method is called, the reference object is cleared and its get() method will return null.
Typically, when a reference object is enqueued, it is expected that the reference object is cleared explicitly via the clear method to avoid memory leaks, because its referent is no longer referenced. In other words, the get method is not expected to be called in common cases once the enqueue method is called. In the case where the get method is called on an enqueued reference object and existing code attempts to access members of the referent, NullPointerException may be thrown. Such code will need to be updated.
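The new behavior can be seen in a few lines; a minimal sketch:
import java.lang.ref.ReferenceQueue;
import java.lang.ref.WeakReference;

public class EnqueueDemo {
    public static void main(String[] args) {
        ReferenceQueue<Object> queue = new ReferenceQueue<>();
        Object referent = new Object();
        WeakReference<Object> ref = new WeakReference<>(referent, queue);
        ref.enqueue();                 // explicitly enqueue the reference object
        System.out.println(ref.get()); // JDK 9: prints null, cleared on enqueue
    }
}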
The internal package sun.invoke.anon has been removed. The functionality it used to provide, namely anonymous class loading with possible constant pool patches, is available via the Unsafe.defineAnonymousClass() method.
A behavioural change has been made to the class java.lang.invoke.LambdaMetafactory so that it is no longer possible to construct an instance. This class has only static methods to create "function objects" (commonly utilized as bootstrap methods) and should not be instantiated. The risk of source and binary incompatibility is very low; analysis of existing code bases found no instantiations.
The invokedynamic byte code instruction is no longer specified by the Java Virtual Machine Specification to wrap any Throwable thrown during linking in java.lang.invoke.BootstrapMethodError, which is then thrown to the caller.
If an instance of Error, or a subclass thereof, is thrown during linking, then that Error is no longer wrapped and is thrown directly to the caller. Any other instance of Throwable, or a subclass thereof, is still wrapped in java.lang.invoke.BootstrapMethodError.
This change in behaviour ensures that errors such as OutOfMemoryError or ThreadDeath are thrown unwrapped and may be acted on or reported directly, thereby enabling more uniform replacement of byte code with an invokedynamic instruction whose call site performs the same functionality as the replaced byte code (and may throw the same errors).
The method java.lang.invoke.MethodHandles.bind has been fixed to correctly obey the access rules when binding a receiver object to a protected method.
The javadoc for Class.getMethod and Class.getMethods refers to the definition of inheritance in the Java Language Specification. Java SE 8 changed these rules in order to support default methods and reduce the number of redundant methods inherited from superinterfaces (see JLS 8, 8.4.8).
Class.getMethod and Class.getMethods were not updated with the 8 release to match the new inheritance definition (both may return non-inherited superinterface methods). The implementation has now been changed to filter out methods that are not members of the class.
java.lang.reflect.Field.get(), Field.get{primitive}(), and java.lang.reflect.Method.invoke() have been updated to use the primitive wrapper classes' valueOf() methods (for example, Integer.valueOf()) instead of always creating new wrappers with "new" (for example, new Integer()) after the reflection libraries have (potentially) optimised the Field/Method instance. This can affect applications that depended on two wrappers being != while still being .equals().
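A sketch of the observable difference (whether identical wrappers are returned depends on when the reflection libraries optimise the Field instance, so the result is not guaranteed):
import java.lang.reflect.Field;

public class WrapperIdentity {
    static int value = 1;

    public static void main(String[] args) throws ReflectiveOperationException {
        Field f = WrapperIdentity.class.getDeclaredField("value");
        Object a = f.get(null);
        Object b = f.get(null);
        // Previously always false (a new Integer each call); now the cached
        // Integer.valueOf(1) instance may be returned, so this can print true.
        System.out.println(a == b);
    }
}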
The behavior of getAnnotatedReceiverType() has been clarified to return an empty AnnotatedType object only for a method/constructor which could conceptually have a receiver parameter but does not have one at present. (Since there is no receiver parameter, there are no annotations to return.) In addition, the behavior of getAnnotatedReceiverType() has been clarified to return null for a method/constructor which cannot ever have a receiver parameter (and therefore cannot have annotations on the type of a receiver parameter): static methods, and constructors of non-inner classes. Incompatibility: Behavioral.
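For example, a minimal sketch of the two cases:
import java.lang.reflect.Method;

public class ReceiverTypes {
    void instanceMethod() {}
    static void staticMethod() {}

    public static void main(String[] args) throws NoSuchMethodException {
        Method m1 = ReceiverTypes.class.getDeclaredMethod("instanceMethod");
        Method m2 = ReceiverTypes.class.getDeclaredMethod("staticMethod");
        System.out.println(m1.getAnnotatedReceiverType()); // empty AnnotatedType
        System.out.println(m2.getAnnotatedReceiverType()); // null: no receiver possible
    }
}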
The exact toString output of an annotation is deliberately not specified; from java.lang.annotation.Annotation.toString():
Returns a string representation of this annotation. The details of the representation are implementation-dependent [...]
Previously, the toString format of an annotation did not output certain information in a way that would be usable as a source code representation of the annotation: string values were not surrounded by double quote characters, array values were surrounded by brackets ("[]") rather than braces ("{}"), and so on.
As a behavioral change, the annotation output has been updated to be faithful to a source code representation of the annotation.
In Java SE 9, the requirement to support multicasting has been somewhat relaxed, in order to support a small number of platforms where multicasting is not available. The specification for the java.net.MulticastSocket::joinGroup and java.nio.channels.MulticastChannel::join methods has been updated to indicate that an UnsupportedOperationException will be thrown if invoked on a platform that does not support multicasting.
There is no impact to Oracle JDK platforms, since they do support multicasting.
In some environments, certain authentication schemes may be undesirable when proxying HTTPS. Accordingly, the Basic authentication scheme has been deactivated by default in the Oracle Java Runtime, by adding Basic to the jdk.http.auth.tunneling.disabledSchemes networking property in the net.properties file. Now, proxies requiring Basic authentication when setting up a tunnel for HTTPS will no longer succeed by default. If required, this authentication scheme can be reactivated by removing Basic from the jdk.http.auth.tunneling.disabledSchemes networking property, or by setting a system property of the same name to "" (empty) on the command line.
Additionally, the jdk.http.auth.tunneling.disabledSchemes and jdk.http.auth.proxying.disabledSchemes networking properties, and system properties of the same name, can be used to disable other authentication schemes that may be active when setting up a tunnel for HTTPS, or when proxying plain HTTP, respectively.
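A sketch of reactivating Basic from code, equivalent to passing -Djdk.http.auth.tunneling.disabledSchemes="" on the command line (it must run before the first connection through the proxy):
public class EnableBasicTunneling {
    public static void main(String[] args) {
        // Empty value: no schemes are disabled for HTTPS tunneling.
        System.setProperty("jdk.http.auth.tunneling.disabledSchemes", "");
        // ... open HttpURLConnections through the proxy as usual ...
    }
}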
The behavior of HttpURLConnection when using a ProxySelector has been modified with this JDK release. HttpURLConnection used to fall back to a DIRECT connection attempt if the configured proxy (or proxies) failed to make a connection. This release introduces a change whereby no DIRECT connection will be attempted in such a scenario. Instead, the HttpURLConnection.connect() method will fail and throw the IOException that occurred from the last proxy tested.
Class loaders created by the java.net.URLClassLoader.newInstance methods can be used to load classes from a list of given URLs. If the calling code does not have access to one or more of the URLs, and the URL artifacts that can be accessed do not contain the required class, then a ClassNotFoundException, or similar, will be thrown. Previously, a SecurityException would have been thrown when access to a URL was denied. If required to revert to the old behavior, this change can be disabled by setting the jdk.net.URLClassPath.disableRestrictedPermissions system property.
A new JDK implementation-specific system property to control caching for HTTP NTLM connections is introduced. Caching for HTTP NTLM connections remains enabled by default, so if the property is not explicitly specified, there will be no behavior change.
On some platforms, the HTTP NTLM implementation in the JDK can support transparent authentication, where the system user credentials are used at the system level. When transparent authentication is not available or unsuccessful, the JDK only supports getting credentials from a global authenticator. If connection to the server is successful, the authentication information will then be cached and reused for further connections to the same server. In addition, connecting to an HTTP NTLM server usually involves keeping the underlying connection alive and reusing it for further requests to the same server. In some applications, it may be desirable to disable all caching for the HTTP NTLM protocol in order to force requesting new authentication with each new request to the server.
With this fix, a new system property is provided that allows control of the caching policy for HTTP NTLM connections. If jdk.ntlm.cache is defined and evaluates to false, then all caching will be disabled for HTTP NTLM connections. Setting this system property to false may, however, result in undesirable side effects: for example, connections to a server that obtain credentials through an Authenticator implementation may result in a popup asking the user for credentials for every new request.
The current implementation of java.net.HttpCookie can only be used to parse cookie headers generated by a server and sent in an HTTP response as a Set-Cookie or Set-Cookie2 header. It does not support parsing of client-generated cookie headers.
This is not completely clear from the API documentation of that class. The documentation could be updated to make the current behavior clearer, or preferably, the implementation could be updated to support both behaviors in a future release.
A new JDK implementation-specific system property to control caching for HTTP SPNEGO (Negotiate/Kerberos) connections is introduced. Caching for HTTP SPNEGO connections remains enabled by default, so if the property is not explicitly specified, there will be no behavior change.
When connecting to an HTTP server that uses SPNEGO to negotiate authentication, and when connection and authentication with the server is successful, the authentication information will be cached and reused for further connections to the same server. In addition, connecting to an HTTP server using SPNEGO usually involves keeping the underlying connection alive and reusing it for further requests to the same server. In some applications, it may be desirable to disable all caching for the HTTP SPNEGO (Negotiate/Kerberos) protocol in order to force requesting new authentication with each new request to the server.
With this fix, a new system property is provided that allows control of the caching policy for HTTP SPNEGO connections. If jdk.spnego.cache is defined and evaluates to false, then all caching will be disabled for HTTP SPNEGO connections. Setting this system property to false may, however, result in undesirable side effects: for example, connections to a server that obtain credentials through an Authenticator implementation may result in a popup asking the user for credentials for every new request.
The sentence "The SecurityManager.checkDelete(String) method is invoked to check delete access if the file is opened with the DELETE_ON_CLOSE option." was appended to the SecurityException throws clause in the specifications of the newBufferedWriter() and write() methods of java.nio.file.Files.
The java.nio.channels.FileLock constructors will now throw a NullPointerException if called with a null Channel parameter. To avoid an unexpected behavior change, subclasses of FileLock should therefore ensure that the Channel they pass to the superclass constructor is non-null.
The RMI multiplex protocol is disabled by default. It can be re-enabled by setting the system property "sun.rmi.transport.tcp.enableMultiplexProtocol" to "true".
The performance of java.time.zone.ZoneRulesProvider.getAvailableZoneIds() is improved by returning an unmodifiable set of zone ids; previously the set was modifiable.
Boundaries specified by java.time.temporal.ChronoField.EPOCH_DAY have been corrected to match the epoch day of LocalDate.MIN and LocalDate.MAX.
The Java SE 8 specification for java.time.Clock states that "The system factory methods provide clocks based on the best available system clock. This may use System.currentTimeMillis(), or a higher resolution clock if one is available." In JDK 8, the implementation of the clock returned was based on System.currentTimeMillis(), and thus had only millisecond resolution. In JDK 9, the implementation is based on the underlying native clock that System.currentTimeMillis() is using, providing the maximum resolution available from that clock. On most systems this can be microseconds, or sometimes even tenths of microseconds.
An application that assumes the clock returned by these system factory methods will always have millisecond precision, and actively depends on it, may therefore need to be updated to take into account the possibility of a greater resolution, as was stated in the API documentation. It is also worth noting that a new Clock.tickMillis(zoneId) method has been added to allow time to be obtained at only millisecond precision; see http://download.java.net/java/jdk9/docs/api/java/time/Clock.html#tickMillis-java.time.ZoneId-.
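A short sketch contrasting the two factories:
import java.time.Clock;
import java.time.ZoneId;

public class ClockPrecision {
    public static void main(String[] args) {
        // May now show microsecond (or finer) precision, depending on the OS clock.
        System.out.println(Clock.systemUTC().instant());
        // Always truncated to whole milliseconds, matching the JDK 8 behavior.
        System.out.println(Clock.tickMillis(ZoneId.systemDefault()).instant());
    }
}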
JDK 9 contains IANA time zone data version 2016j. For more information, refer to Timezone Data Versions in the JRE Software.
JDK 9 contains IANA time zone data version 2016d. For more information, refer to Timezone Data Versions in the JRE Software.
JDK 9 contains IANA time zone data version 2016f. For more information, refer to Timezone Data Versions in the JRE Software.
JDK 9 contains IANA time zone data version 2016i. For more information, refer to Timezone Data Versions in the JRE Software.
java.util.Properties is a subclass of the legacy Hashtable class, which synchronizes on itself for any access. System properties are stored in a Properties object. They are a common way to change default settings, and sometimes must be read during class loading.
System.getProperties() returns the same Properties instance accessed by the system, which any application code might synchronize on. This situation has led to deadlocks in the past, such as JDK-6977738.
The Properties class has been updated to store its values in an internal ConcurrentHashMap (instead of using the inherited Hashtable mechanism), and its getter methods and legacy Enumerations are no longer synchronized. This should reduce the potential for deadlocks. It also means that since Properties' Iterators are now generated by ConcurrentHashMap, they are no longer fail-fast: ConcurrentModificationExceptions are no longer thrown.
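A sketch of the fail-fast difference (on JDK 8 this loop could throw ConcurrentModificationException; the specific keys are arbitrary):
import java.util.Properties;

public class NoFailFast {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("a", "1");
        props.setProperty("b", "2");
        for (Object key : props.keySet()) {
            props.setProperty("c", "3"); // modify while iterating
        }
        System.out.println(props); // JDK 9: completes without an exception
    }
}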
The specification of the class java.util.prefs.Preferences was modified to disallow the use of any String containing the null control character, code point U+0000, in any String used as the key or value parameter in any of the abstract put*(), get*(), and remove methods. If such a character is detected, an IllegalArgumentException shall be thrown.
The specification of the class java.util.prefs.AbstractPreferences was modified according to the corresponding change in its superclass java.util.prefs.Preferences to disallow the use of any String containing the null control character, code point U+0000, in any String used as the key or value parameter in any of the put*(), get*(), and remove() method implementations. These method implementations were modified to throw an IllegalArgumentException upon encountering such a character in a key or value String in these contexts. Also, the class specification was modified to correct the erroneous reference to the flush() and sync() methods as returning a boolean value when they are in fact void.
java.util.Properties defines the loadFromXML and storeToXML methods for Properties stored in XML documents. XML specifications only require XML processors to read entities in UTF-8 and UTF-16, and the API docs for these methods only require an implementation to support UTF-8 and UTF-16. The implementation of these methods has changed in JDK 9 to use a smaller XML parser, which may impact applications that have been using these methods with other encodings. The new implementation does not support all encodings that the legacy implementation supported; in particular, it does not support UTF-32/UCS-4 or the IBM* and x-IBM-* encodings. For maximum portability, applications are encouraged to use UTF-8 and UTF-16.
As part of the fix for JDK-8006627, a check of the String parameter of java.util.UUID.fromString(String) was added, which will result in an IllegalArgumentException being thrown if the length of the parameter is greater than 36.
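For example (the 37-character string below is illustrative):
import java.util.UUID;

public class UuidLengthCheck {
    public static void main(String[] args) {
        String tooLong = "123e4567-e89b-12d3-a456-4266141740000"; // 37 characters
        try {
            UUID.fromString(tooLong);
        } catch (IllegalArgumentException expected) {
            System.out.println("rejected: " + tooLong); // thrown on JDK 9
        }
    }
}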
The specification of the default locales used in Formatter related classes has been clarified to designate the default locale for formatting (Locale.Category.FORMAT).
In Java SE 9, threads that are part of the fork/join common pool will always return the system class loader as their thread context class loader. In previous releases, the thread context class loader may have been inherited from whatever thread causes the creation of the fork/join common pool thread, e.g. by submitting a task. An application cannot reliably depend on when, or how, threads are created by the fork/join common pool, and as such cannot reliably depend on a custom defined class loader to be set as the thread context class loader.
The ZipFile implementation has changed significantly in JDK 9 to improve reliability. A consequence of these changes is that the implementation now rejects ZIP files where the month or day in an MS-DOS date/time field is 0. While technically invalid, these ZIP files were not rejected in previous releases. A future release will address this issue.
zlib issue #275 tracks an issue in zlib 1.2.11 that may impact applications using the java.util.zip.Deflater API when this version of zlib is installed (Ubuntu 17.04, for example). Specifically, it may impact code that changes the compression level or strategy and then resets the deflater. More details can be found in JDK-8184306. The JDK includes a patched version of zlib on Microsoft Windows, so this issue does not impact that platform.
The java.util.zip.ZipEntry API documentation specifies: "A directory entry is defined to be one whose name ends with a '/'". However, in previous JDK releases, java.util.zip.ZipFile.getEntry(String entryName) could return a ZipEntry instance with an entry name that does not end with '/' for an existing zip directory entry, when the passed-in argument entryName does not end with a '/' and there is a matching zip directory entry with name entryName + '/' in the zip file. With JDK 9, the name of the ZipEntry instance returned from java.util.zip.ZipFile.getEntry() always ends with '/' for any zip directory entry.
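A sketch of the lookup (example.zip and its "dir/" entry are assumptions):
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;

public class DirEntryName {
    public static void main(String[] args) throws Exception {
        try (ZipFile zf = new ZipFile("example.zip")) { // assumed to contain "dir/"
            ZipEntry e = zf.getEntry("dir"); // lookup without the trailing slash
            System.out.println(e.getName()); // JDK 9: prints "dir/"
        }
    }
}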
An ArrayIndexOutOfBoundsException will be thrown in java.util.jar.JarFile if the Java run-time encounters the backtick (`) character in a JAR file's manifest. This can be worked around by removing backtick characters from the JAR file's manifest.
LogRecord now stores the event time in the form of a java.time.Instant. The XMLFormatter DTD is upgraded to print the new, higher time resolution.
In Java SE 9, java.util.logging is updated to use java.time instead of System.currentTimeMillis() and java.util.Date. This allows for higher time stamp precision in LogRecord.
As a consequence, the implementation of the methods getMillis() and setMillis(long) in java.util.logging.LogRecord has been changed to use java.time.Instant, and the method setMillis(long) has been deprecated in favor of the new method LogRecord.setInstant(java.time.Instant). The java.util.logging.SimpleFormatter has been updated to pass a java.time.ZonedDateTime object instead of java.util.Date to String.format. The java.util.logging.XMLFormatter has been updated to print a new optional <nanos> XML element after the <millis> element. The <nanos> element contains a nanoseconds adjustment to the number of milliseconds printed in the <millis> element. The XMLFormatter will also print the full java.time.Instant in the <date> field, using the java.time.format.DateTimeFormatter.ISO_INSTANT formatter.
Compatibility with previous releases:
The LogRecord serial form, while remaining fully backward/forward compatible, now contains an additional serial nanoAdjustment field of type int, which corresponds to a nanoseconds adjustment to the number of milliseconds contained in the serial millis field. If a LogRecord is serialized and transmitted to an application running on a previous release of the JDK, the application will simply see a LogRecord with a time truncated at millisecond resolution. Similarly, if a LogRecord serialized by an application running on a previous release of the JDK is transmitted to an application running on Java SE 9 or later, only the millisecond resolution will be available.
Applications that parse logs produced by the XMLFormatter, and which perform validation, may need to be upgraded to the newer version of logger.dtd, available in Appendix A of the Logging Overview. In order to mitigate the compatibility risks, the XMLFormatter class (and subclasses) can be configured to revert to the old XML format from Java SE 8 and before. See the java.util.logging.XMLFormatter API documentation for more details.
There could also be an issue if a subclass of LogRecord overrides getMillis/setMillis without calling the implementation of the superclass. In that case, the event time as seen by the formatters and other classes may be wrong, as these have been updated to no longer call getMillis() but to use getInstant() instead.
LogManager.readConfiguration calls Properties.load, which may throw IllegalArgumentException if it encounters an invalid unicode escape sequence in the input stream. In previous versions of the JDK, the IllegalArgumentException was simply propagated to the caller. This was, however, in violation of the specification, since LogManager.readConfiguration is not specified to throw IllegalArgumentException. Instead, it is specified to throw IOException "if there are problems reading from the stream". In Java SE 9, LogManager.readConfiguration will no longer propagate such an IllegalArgumentException directly, but will wrap it inside an IOException in order to conform to the specification.
A new "java.util.logging.FileHandler.maxLocks" configurable property is added to java.util.logging.FileHandler.
This new logging property can be defined in the logging configuration file and makes it possible to configure the maximum number of concurrent log file locks a FileHandler can handle. The default value is 100.
In a highly concurrent environment where multiple (more than 101) standalone client applications are using the JDK Logging API with FileHandler simultaneously, it may happen that the default limit of 100 is reached, resulting in a failure to acquire FileHandler file locks and causing an IOException to be thrown. In such a case, the new logging property can be used to increase the maximum number of locks before deploying the application.
If not overridden, the default value of maxLocks (100) remains unchanged. See java.util.logging.LogManager and java.util.logging.FileHandler API documentation for more details.
When a logger has a handler configured in the logging configuration file (using the <logger>.handlers property), a reference to that logger will be internally kept by the LogManager until LogManager.reset() is called, in order to ensure that the associated handlers are properly closed on reset. As a consequence, such loggers won't be garbage collected until LogManager.reset() is called. An application that needs to allow garbage collection of these loggers before reset is called can revert to the old behaviour by additionally specifying <logger>.handlers.ensureCloseOnReset=false in the logging configuration file. Note, however, that doing so will reintroduce the resource leak that JDK-8060132 fixes. Such an application must therefore take responsibility for keeping the logger alive as long as it is needed, and for closing any handler attached to it before the logger gets garbage collected. See the LogManager API documentation for more details.
A new JDK implementation-specific system property, jdk.internal.FileHandlerLogging.maxLocks, has been introduced to control the java.util.logging.FileHandler MAX_LOCKS limit. The default value of the current MAX_LOCKS (100) is retained if this new system property is not set or an invalid value is provided. Valid values for this property are integers ranging from 1 to Integer.MAX_VALUE-1.
The java.util.logging.Formatter.formatMessage API specification stated that MessageFormat would be called if the message string contained "{0". In practice, MessageFormat was called if the message string contained "{0", "{1", "{2", or "{3".
In Java SE 9, the specification and implementation of this method have been changed to call MessageFormat if the message string contains "{<digit>", where <digit> is in [0..9].
In practice, this should be transparent for calling applications. The only case where an application might see a behaviour change is if the application passes a format string that does not contain any formatter of the form "{0", "{1", "{2", or "{3", but does contain "{<digit>" with <digit> within [4..9], along with an array of parameters that contains at least <digit>+1 elements, and depends on MessageFormat not being called. In that case, the method will now return a formatted message instead of the format string.
In java.util.regex.Pattern, for a character class of the form [^a-b[c-d]], the negation ^ negates the entire class, not just the first range. The negation operator "^" has the lowest precedence among the character class operators (intersection "&&", union, range "-", and nested class "[ ]"), so it is always applied last.
Previously, the negation was applied only to the first range or group, leading to inconsistent and misunderstood matches. Details and examples can be found in the issue and at http://mail.openjdk.java.net/pipermail/core-libs-dev/2011-June/006957.html.
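A sketch of the new precedence (results as expected under the JDK 9 semantics described above):
import java.util.regex.Pattern;

public class NegationPrecedence {
    public static void main(String[] args) {
        Pattern p = Pattern.compile("[^a-b[c-d]]");
        // Negation applies to the whole union of a-b and c-d:
        System.out.println(p.matcher("c").matches()); // false on JDK 9
        System.out.println(p.matcher("e").matches()); // true
    }
}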
Pattern.compile(String, int) will throw IllegalArgumentException if anything other than a combination of predefined values is passed as the second argument, in accordance with the specification.
The Arrays.asList() API returns an instance of List. Calling the toArray() method on that List instance is specified always to return Object[], that is, an array of Object. In previous releases, it would sometimes return an array of some subtype. Note that the declared return type of Collection.toArray() is Object[], which permits an instance of an array of a subtype to be returned. The specification wording, however, clearly requires an array of Object to be returned.
The toArray() method has been changed to conform to the specification, and it now always returns Object[]. This may cause code that was expecting the old behavior to fail with a ClassCastException. An example of code that worked in previous releases but that now fails is the following:
List<String> list = Arrays.asList("a", "b", "c");
String[] array = (String[]) list.toArray();
If this problem occurs, rewrite the code to use the one-arg form toArray(T[]), and provide an instance of the desired array type. This will also eliminate the need for a cast.
String[] array = list.toArray(new String[0]);
Before the JDK 9 release, invocation of the method Collections.asLifoQueue with a null argument value would not throw a NullPointerException as specified by the class documentation. Instead a NullPointerException would be thrown when operating on the returned Queue. The JDK 9 release corrects the implementation of Collections.asLifoQueue to conform to the specification. Behavioral compatibility is not preserved but it is expected that the impact will be minimal given analysis of existing usages.
Previously, the default implementation of List.spliterator derived a Spliterator from the List's iterator, which splits poorly and affects the performance of a parallel stream returned by List.parallelStream. The default implementation of List.spliterator now returns an optimally splitting Spliterator implementation for List implementations that implement java.util.RandomAccess. As a result, parallel stream performance may be improved for third-party List implementations, such as those provided by Eclipse Collections, that do not override List.spliterator for compatibility across multiple major versions of the Java platform. This enhancement is a trade-off: it requires that the List.get method of such lists implementing RandomAccess have no side effects, ensuring safe concurrent execution of the method when a parallel stream pipeline is executed.
The locale data based on the Unicode Consortium's CLDR (Common Locale Data Repository) has been upgraded to release 29 in JDK 9. See http://cldr.unicode.org/index/downloads/cldr-29 for more detail.
Prior to JDK 9, SPI implementations in the java.awt.im.spi, java.text.spi, and java.util.spi packages used the Java Extension Mechanism. In JDK 9, this mechanism has been removed. SPI implementations should now be deployed on the application class path or as modules on the module path.
In releases through JDK 8, SPI implementations of java.util.spi.ResourceBundleControlProvider were loaded using the Java Extension Mechanism. In JDK 9, this mechanism is no longer available. Instead, SPI implementations may be placed on an application's class path.
The default locale data provider lookup does not load SPI based locale sensitive services. If it is needed, the system property "java.locale.providers" needs to designate "SPI" explicitly. For more detail, refer to LocaleServiceProvider.
Remote class loading via JNDI object factories stored in naming and directory services, is disabled by default. To enable remote class loading by the RMI Registry or COS Naming service provider, set the following system property to the string "true", as appropriate:
com.sun.jndi.rmi.object.trustURLCodebase
com.sun.jndi.cosnaming.object.trustURLCodebase
javax.naming.CompoundName, an extensible type, has a protected member, impl, whose type, javax.naming.NameImpl, is package-private. This is a long-standing issue where an inaccessible implementation type has mistakenly made its way into the public Java SE API.
The new javac -Xlint option helped identify this issue. In Java SE 9, this protected member has been removed from the public API.
Since the type of the member is package-private, it cannot be directly referenced by non-JDK code. The member type does not implement or extend any supertype directly, therefore any non-JDK subtype of javax.naming.CompoundName could only refer to this member as Object. It is possible that such a subtype might invoke toString, or any of Object's methods, on this member, or even synchronize on it. In such a case, subtypes of javax.naming.CompoundName will require updating.
Code making a static reference to the member will fail to compile, e.g. error: impl has private access in CompoundName. Previously compiled code executed with JDK 9 that accesses the member directly will fail, e.g. java.lang.IllegalAccessError: tried to access field javax.naming.CompoundName.impl from class CompoundName$MyCompoundName.
The JDK was throwing a NullPointerException when a non-compliant REFERRAL status result was sent but no referral values were included. With this change, a NamingException with the message "Illegal encoding: referral is empty" will be thrown in such circumstances. See JDK-8149450 and JDK-8154304 for more details.
The JDWP socket connector has been changed to bind to localhost only, if no IP address or hostname is specified on the agent command line. A hostname of asterisk (*) may be used to achieve the old behavior, which is to bind the JDWP socket connector to all available interfaces; this is not secure and is not recommended.
When running a Java application with the options -javaagent:myagent.jar -Djava.system.class.loader=MyClassLoader, myagent.jar is added to the custom system class loader rather than the application class loader.
In addition, the java.lang.instrument package description has a small update making it clear that a custom system class loader needs to define appendToClassPathForInstrumentation in order to load the agent at startup. Previously, custom system class loaders were required to implement this method only if agents were started in the live phase (Agent_OnAttach).
In Java SE 9, the java.util.logging.LoggingMXBean interface is deprecated in favor of the java.lang.management.PlatformLoggingMXBean interface. The java.util.logging.LogManager.getLoggingMXBean() method is also deprecated, in favor of java.lang.management.ManagementFactory.getPlatformMXBean(PlatformLoggingMXBean.class).
The concrete implementation of the logging MXBean registered in the MBeanServer and obtained from the ManagementFactory will only implement java.lang.management.PlatformLoggingMXBean, and no longer java.util.logging.LoggingMXBean. It must be noted that the PlatformLoggingMXBean and LoggingMXBean attributes are exactly the same. The PlatformLoggingMXBean interface has all the methods defined in LoggingMXBean, and so PlatformLoggingMXBean by itself provides the full management capability of the logging facility.
This should be mostly transparent to remote and local clients of the API.
Compatibility:
Calls to ManagementFactory.newPlatformMXBeanProxy(MBeanServerConnection, ObjectName, java.util.logging.LoggingMXBean.class) and calls to JMX.newMXBeanProxy(MBeanServerConnection, ObjectName, java.util.logging.LoggingMXBean.class) will continue to work as before. Remote clients running any version of the JDK should see no changes, except for the interface name in MBeanInfo and the change in isInstanceOf reported in items 1 and 2 below.
The behavioral change and source incompatibility due to this change are as follows:
1. ManagementFactory.getPlatformMBeanServer().isInstanceOf(ObjectName, "java.util.logging.LoggingMXBean") will now return 'false' instead of 'true'. If an application depends on this, then a workaround is to change the source of the calling code to check for java.lang.management.PlatformLoggingMXBean instead.
2. The logging MXBean MBeanInfo will now report that its management interface is java.lang.management.PlatformLoggingMXBean instead of the non-standard sun.management.ManagementFactoryHelper$LoggingMXBean name it used to display. The new behavior has the advantage that the reported interface name is now a standard class.
3. Local clients that obtain an instance of the logging MXBean by calling ManagementFactory.getPlatformMXBean(PlatformLoggingMXBean.class) will no longer be able to cast the result to java.util.logging.LoggingMXBean. PlatformLoggingMXBean already has all the methods defined in LoggingMXBean, therefore a simple workaround is to change the code to accept PlatformLoggingMXBean instead, or to use the deprecated LogManager.getLoggingMXBean() instead.
The com.sun.management.HotSpotDiagnostic::dumpHeap API is modified to throw IllegalArgumentException if the supplied file name does not end with the ".hprof" suffix. Existing applications which do not provide a file name ending with the ".hprof" extension will fail with IllegalArgumentException. In that case, applications can either choose to handle the exception or restore the old behaviour by setting the system property jdk.management.heapdump.allowAnyFileSuffix to true.
A new annotation, @javax.management.ConstructorParameters, is introduced in the java.management module.
The newly introduced annotation is a 1:1 copy of @java.beans.ConstructorProperties. Constructors annotated by @java.beans.ConstructorProperties will still be recognized and processed. In case a constructor is annotated by both @javax.management.ConstructorParameters and @java.beans.ConstructorProperties, only @javax.management.ConstructorParameters will be used.
The JMX ObjectName class was refactored, reducing class member metadata by 8 bytes: each JMX ObjectName instance now occupies 8 bytes less memory than a JDK 8 ObjectName instance.
A new restriction on domain name length is introduced. The domain name is now a case-sensitive string of limited length; the length limit is Integer.MAX_VALUE/4.
The Javadoc Standard Doclet documentation has been enhanced to specify that it doesn't validate the content of documentation comments for conformance, nor does it attempt to correct any errors in documentation comments. See the Conformance section in the Doclet documentation.
The implementation of the Attach API has changed in JDK 9 to disallow attaching to the current VM by default. This change should have no impact on tools that use the Attach API to attach to a running VM. It may impact libraries that mis-use this API as a way to get at the java.lang.instrument API. The system property jdk.attach.allowAttachSelf may be set on the command line to mitigate any compatibility issues arising from this change.
A warning has been added to the plugin authentication dialog in cases where HTTP Basic authentication (credentials are sent unencrypted) is used while using a proxy or while not using SSL/TLS protocols:
"WARNING: Basic authentication scheme will effectively transmit your credentials in clear text. Do you really want to do this?"
JDK 9 no longer contains samples, including the JnlpDownloadServlet. If you need to use the JnlpDownloadServlet, you can get it from the latest update of JDK 8.
The Deployment Toolkit API installLatestJRE() and installJRE(requestedVersion) methods from deployJava.js, and the install() method from dtjava.js, no longer install the JRE. If a user's version of Java is below the security baseline, they redirect the user to java.com to get an updated JRE.
Starting with JDK 9, support for deployment technologies designed to access Java applications through a web browser is limited to client and development platforms. Use of deployment technologies on server platforms such as Oracle Linux, SUSE Linux, Windows Server 2016, and others is not supported. See the JDK 9 and JRE 9 Certified System Configurations page for a complete list.
JDK-8080977 introduced a delay on applet launch; the delay appeared only in IE and lasted about 20 seconds. JDK-8136759 removed this delay.
The documentation for the Java Packager states that the -srcfiles argument is not mandatory, and that if it is omitted, all files in the directory specified by the -srcdir argument will be used. This does not function as expected: when -srcfiles is omitted, the resultant bundle may issue a class-not-found error.
A new option, "Use roaming profile", has been added to the Java Control Panel (Windows only). When the option is set, certain user data is stored in the roaming profile; the rest of the cache (the cache without LAP), and the temp and log folders, are always stored in LocalLow regardless of the roaming profile settings.
Java Web Start applications cannot be launched when clicking a JNLP link from IE 11 on Windows 10 Creators Update when a 64-bit JRE is installed. The workaround is to uninstall the 64-bit JRE and use only the 32-bit JRE.
Both jcontrol and javaws -viewer do not work on Oracle Linux 6. Java Control Panel functionality is dependent on JavaFX technology, which is not supported on Oracle Linux 6 in the JDK 9 release. Users reliant on the Java Control Panel are encouraged to use the most up-to-date JDK 8 release.
JavaFX applications deployed with <application-desc type="JavaFX"> <param name="param1" value="foo"/> </application-desc> will have their <param> elements ignored. It is recommended that JavaFX applications relying on parameter values continue to use the <javafx-desc> element of the XML extension until this is resolved.
In 8u20, the custom XML parser that was used in Java Web Start to parse the JNLP file was replaced with the standard SAX parser. When a parsing error occurred, the code would print a warning message to the Java Console and trace file, and then try again using the custom XML parser. In JDK 9, this fallback has been removed. If the JNLP file cannot be parsed by the SAX parser, an error dialog will be shown and the app will not run. This could cause compatibility errors with existing JNLP files that don't follow the XML rules that are enforced by the SAX parser.
New-style JVM arguments, those with embedded spaces (e.g., "--add-modules <module>" and "--add-exports <module>" instead of "--add-modules=<module>" and "--add-exports=<module>"), are not supported when passed through Java Web Start or the Java Plug-in. If arguments with embedded spaces are passed, they could be processed incorrectly.
In JDK 9, Java Web Start applications are prohibited from using URLStreamHandlerFactory. Using URLStreamHandlerFactory via javaws will result in an exception with the message "factory already defined." Applications launched directly with the java command are not impacted.
JDK 9 supports code generation for the AVX-512 (AVX3) instruction set on x86 CPUs, but not by default. A maximum of AVX2 is supported by default in JDK 9. The flag -XX:UseAVX=3 can be used to enable AVX-512 code generation on CPUs that support it.
The 32-bit Client VM was removed from linux-x86 and Windows. As a result, the -client flag is ignored with 32-bit versions of Java on these platforms, and the 32-bit Server VM is used instead. However, due to the limited virtual address space on Windows in 32-bit mode, by default the Server VM emulates the behavior of the Client VM and uses only the C1 JIT compiler, the Serial GC, and a 32 MB code cache. To revert to server mode, the flag -XX:{+|-}TieredCompilation can be used. On linux-x86 there is no Client VM mode emulation.
When performing OSR on loops with huge stride and/or initial values, in very rare cases, the tiered/server compilers could produce non-canonical loop shapes that produce nondeterministic answers when the answers should be deterministic. This issue has now been fixed.
In 8u40, and 7u80, a new feature was introduced to use the PICL library on Solaris to get some system information. If this library was not found, we printed an error message:
Java HotSpot(TM) Server VM warning: PICL (libpicl.so.1) is missing. Performance will not be optimal.
This warning was misleading: not finding the PICL library is a very minor issue, and the warning mostly led to confusion. In this release, the warning was removed.
According to the Java VM Specification, final fields can be modified by the putfield byte code instruction only if the instruction appears in the instance initializer method <init> of the field's declaring class. Similarly, static final fields can be modified by a putstatic instruction only if the instruction appears in the class initializer method <clinit> of the field's declaring class. With the JDK 9 release, the HotSpot VM fully enforces the previously mentioned restrictions, but only for class files with version number >= 53. For class files with version numbers < 53, the restrictions are only partially enforced (as is done by releases preceding JDK 9). That is, for class files with version number < 53, final fields can be modified in any method of the class declaring the field (not only class/instance initializers).
Improvements have been implemented that increase the performance of several security algorithms, especially when using ciphers with key lengths of 2048 bits or greater. To turn on these improvements, use the options -XX:+UseMontgomeryMultiplyIntrinsic and -XX:+UseMontgomerySquareIntrinsic. This improvement applies only to Linux and Solaris on the x86_64 architecture.
The IEEE 754 standard distinguishes between signaling and quiet NaNs. When executing floating point operations, some processors silently convert signaling NaNs to quiet NaNs. The 32-bit x86 version of the HotSpot JVM allows silent conversions to happen. With JVM releases preceding JDK 9, silent conversions happen depending on whether the floating point operations are part of compiled or interpreted code. With the JDK 9 release, interpreted and compiled code behaves consistently with respect to signaling and quiet NaNs.
This enhancement provides a way to specify more granular levels for the GC verification enabled by the VerifyBeforeGC, VerifyAfterGC, and VerifyDuringGC diagnostic options. It introduces a new diagnostic option, VerifySubSet, with which one can specify the subset of the memory system that should be verified.
With this new option, one or more sub-systems can be specified in a comma-separated string. Valid memory sub-systems are: threads, heap, symbol_table, string_table, codecache, dictionary, classloader_data_graph, metaspace, jni_handles, c-heap, and codecache_oops.
During the GC verification, only the sub-systems specified using VerifySubSet get verified:
D:\tests>java -XX:+UnlockDiagnosticVMOptions -XX:+VerifyBeforeGC -XX:VerifySubSet="threads,c-heap" -Xlog:gc+verify=debug Test
[0.095s][debug ][gc,verify] Threads
[0.099s][debug ][gc,verify] C-heap
[0.105s][info ][gc,verify] Verifying Before GC (0.095s, 0.105s) 10.751ms
[0.120s][debug ][gc,verify] Threads
[0.124s][debug ][gc,verify] C-heap
[0.130s][info ][gc,verify] Verifying Before GC (0.120s, 0.130s) 9.951ms
[0.148s][debug ][gc,verify] Threads
[0.152s][debug ][gc,verify] C-heap
If any invalid memory sub-systems are specified with VerifySubSet, the Java process exits with the following error message:
D:\tests>java -XX:+UnlockDiagnosticVMOptions -XX:+VerifyBeforeGC -XX:VerifySubSet="threads,c-heap,hello" -Xlog:gc+verify=debug oom
Error occurred during initialization of VM
VerifySubSet: 'hello' memory sub-system is unknown, please correct it
The logging for all garbage collectors in HotSpot have been changed to make use of a new logging framework that is configured through the -Xlog
command line option. The command line flags -XX:+PrintGC, -XX:+PrintGCDetails
and -Xloggc
have been deprecated and will likely be removed in a future release. They are currently mapped to similar -Xlog
configurations. All other flags that were used to control garbage collection logging have been removed. See the documentation for -Xlog
for details on how to now configure and control the logging. These are the flags that were removed:
CMSDumpAtPromotionFailure, CMSPrintEdenSurvivorChunks, G1LogLevel, G1PrintHeapRegions, G1PrintRegionLivenessInfo, G1SummarizeConcMark, G1SummarizeRSetStats, G1TraceConcRefinement, G1TraceEagerReclaimHumongousObjects, G1TraceStringSymbolTableScrubbing, GCLogFileSize, NumberOfGCLogFiles, PrintAdaptiveSizePolicy, PrintClassHistogramAfterFullGC, PrintClassHistogramBeforeFullGC, PrintCMSInitiationStatistics, PrintCMSStatistics, PrintFLSCensus, PrintFLSStatistics, PrintGCApplicationConcurrentTime, PrintGCApplicationStoppedTime, PrintGCCause, PrintGCDateStamps, PrintGCID, PrintGCTaskTimeStamps, PrintGCTimeStamps, PrintHeapAtGC, PrintHeapAtGCExtended, PrintJNIGCStalls, PrintOldPLAB, PrintParallelOldGCPhaseTimes, PrintPLAB, PrintPromotionFailure, PrintReferenceGC, PrintStringDeduplicationStatistics, PrintTaskqueue, PrintTenuringDistribution, PrintTerminationStats, PrintTLAB, TraceDynamicGCThreads, TraceMetadataHumongousAllocation, UseGCLogFileRotation, VerifySilently
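As a rough migration sketch (gc.log and MyApp are placeholders), a legacy logging command line such as:
java -XX:+PrintGCDetails -Xloggc:gc.log MyApp
can be approximated with the unified logging framework as:
java -Xlog:gc*:file=gc.log MyApp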
On Linux kernels 2.6 and later, the JDK would include time spent waiting for I/O completion as "CPU usage". During periods of heavy I/O activity, this could result in misleadingly high values reported as CPU consumption by various tools like Flight Recorder and performance counters. This issue has been resolved.
Some Linux kernel versions (including, but not limited to, 3.13.0-121-generic and 4.4.0-81-generic) are known to contain an incorrect fix for a Linux kernel stack overflow issue (see CVE-2017-1000364). The incorrect fix can trigger crashes in the Java Virtual Machine. Upgrading the kernel to a version that includes the corrected fix addresses the problem.
This change enforces the unqualified name format checks for NameAndType strings as outlined in the JVM specification, sections 4.4.6 and 4.2.2, meaning that some illegal names and descriptors that users may be utilizing in their classfiles will now be caught with a ClassFormatError. This includes format checking for all strings under non-referenced NameAndType entries. Users will see a change if they (A) are using Java classfile version 6 or below and have an illegal NameAndType descriptor with no Methodref or Fieldref reference to it; or (B) are using any Java classfile version and have an illegal NameAndType name with no Methodref or Fieldref reference to it.
In both (A) and (B) the users will now receive a ClassFormatError for those illegal strings, which is an enforcement of unqualified name formats as delineated in JVMS 4.2.2.
The current version of the Java Native Interface (JNI) needs to be updated due to the addition of new application programming interfaces to support Jigsaw. JNI_VERSION_9 was added with a value of 0x00090000 to the available versions, and CurrentVersion was changed to this new value.
The JVM has been fixed to check that the constant pool types JVM_CONSTANT_Methodref and JVM_CONSTANT_InterfaceMethodref are consistent with the type of method they reference. These checks are made during method resolution and are also performed for methods that are referenced by JVM_CONSTANT_MethodHandle.
If the consistency checks fail, an IncompatibleClassChangeError is thrown.
javac has never generated inconsistent constant pool entries, but some bytecode-generating software may. In many cases, if ASM is embedded in the application, upgrading to ASM 5.1 resolves the exception. After upgrading ASM, be sure to replace all uses of deprecated functions with calls to the new functions, particularly the new variants of visitMethodInsn and Handle that take a boolean indicating whether the method is an interface method.
JDK 8 and below offered a client JVM and a server JVM for Windows 32-bit systems with the default being the client JVM. JDK 9 will offer only the server JVM.
The server JVM has better performance although it might require more resources. The change is made to reduce complexity and to benefit from the increased capabilities of computers.
The JNI function DetachCurrentThread has been added to the list of JNI functions that can safely be called with an exception pending. The HotSpot Virtual Machine has always supported this, as it reports that the exception occurred in a similar manner to the default handling of uncaught exceptions at the Java level. Other implementations are not obligated to do anything with the pending exception.
The VM Options "-Xoss", "-Xsqnopause", "-Xoptimize" and "-Xboundthreads" are obsolete in JDK 9 and are ignored. Use of these options will result in a warning being issued in JDK 9 and they may be removed completely in a future release.
The VM Options "-Xoss", "-Xsqnopause", "-Xoptimize" options were already silently ignored for a long time.
The VM Option "-Xboundthreads" was only needed on Solaris 8/9 (when using the T1 threading library).
The -XX:-JNIDetachReleasesMonitors flag requested that the VM run in a pre-JDK 6 compatibility mode with regard to not releasing monitors when a JNI attached thread detaches. This option is obsolete in JDK 9 and is ignored, as the VM always conforms to the JNI Specification and releases monitors. Use of this option will result in a warning being issued in JDK 9, and it may be removed completely in a future release.
The VM options -XX:AdaptiveSizePausePolicy and -XX:ParallelGCRetainPLAB are obsolete in JDK 9 and are ignored. Use of these options will result in a warning being issued in JDK 9, and they may be removed completely in a future release.
The option -XX:AdaptiveSizePausePolicy has been unused for some time.
The option -XX:ParallelGCRetainPLAB was a diagnostic flag relating to garbage collector combinations that no longer exist.
When a large TLS (thread-local storage) size is set for threads, the JVM can fail with a stack overflow. The reason for this behavior is that the process reaper thread was created with a small stack size of 32768 bytes. When a large TLS size is set, it steals space from the thread's stack, which eventually results in a stack overflow. This is a known glibc bug. To overcome this issue, a workaround (jdk.lang.processReaperUseDefaultStackSize) has been introduced, with which the user can set the reaper thread's stack size to the default instead of 32768. This gives the reaper thread a bigger stack size, so for a large TLS size, such as 32k, the process will not fail. Users can set this flag in one of two ways, as shown below:
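A minimal sketch of the two ways (MyApp is a placeholder; the property must be set before the first subprocess is spawned):
On the command line:
java -Djdk.lang.processReaperUseDefaultStackSize=true MyApp
Or programmatically, early in startup:
System.setProperty("jdk.lang.processReaperUseDefaultStackSize", "true");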
The problem has been observed only when the JVM is started from JNI code in which TLS is declared using "__thread".
When dumping the heap in binary format, HPROF format 1.0.2 is now always used. Previously, format 1.0.1 was used for heaps smaller than 2GB. HPROF format 1.0.2 is also used by jhsdb jmap for the serviceability agent.
The jsadebugd command to start the remote debug server can now be launched from the common SA launcher jhsdb.
The new command to start the remote debug server is jhsdb debugd.
The Java runtime now uses the system zlib library (the zlib library installed on the underlying operating system) for its zlib compression support (the deflation and inflation functionality in java.util.zip, for example) on Solaris and Linux platforms.
The OS_VERSION property is no longer present in the release file. Scripts or tools that read the release file may need to be updated to handle this change.
The REMOVEOUTOFDATEJRES feature does not work when the installer is run as the LocalSystem user. The LocalSystem user is not a domain user and therefore does not have network access to retrieve the list of out-of-date JREs.
Users running Internet Explorer Enhanced Security Configuration (ESC) on Windows Server 2008 R2 may have experienced issues installing Java in interactive mode. This issue has been resolved in the 8u71 release. Installers executed in interactive mode will no longer appear to be stalled on ESC configurations.
Demos were removed from the package tar.Z bundle (JDK-7066713). There is a separate Demos & Samples bundle beginning with 7u2 b08 and 6u32 b04, but Solaris patches still contain SUNWj7dmo/SUNWj6dmo. The 64-bit packages are SUNWj7dmx/SUNWj6dmx.
Demo packages remain in the existing Solaris patches; however, their presence in a patch does not mean that they are installed. They will be patched only if the end user has them installed on the system.
http://docs.oracle.com/javase/7/docs/webnotes/install/solaris/solaris-jdk.html
The link above is to the Solaris OS install directions for the JDK. The SUNWj7dmx package is mentioned in the tar.Z portion of the directions. This is confusing to some because, according to the cited bug, the SUNWj7dmx package shouldn't be part of the tar.Z bundle.
Starting with the JDK 9 release, a Stage on Mac and Linux platforms will be initially filled using the Fill property of the Scene if its Fill is a Color. An average color, computed within the stops range, will be used if the Fill is a LinearGradient or RadialGradient. Previously, it was initially filled with WHITE, irrespective of the Fill in the Scene. This change in behavior will reduce the flashing that can be seen with a dark Scene background, but applications should be aware of this change in behavior so they can set an appropriate Fill color for their Scene.
The bug fix for JDK-8089861, which was first integrated in JDK 8u102, fixes a memory leak when Java objects are passed into JavaScript. Prior to JDK 8u102, the WebView JavaScript runtime held a strong reference to such bound objects, which prevented them from being garbage collected. After the fix for JDK-8089861, the WebView JavaScript runtime uses weak references to refer to bound Java objects. The specification was updated to make it clear that this is the intended behavior.
Applications which rely on the previously unspecified behavior might be affected by the updated behavior if the application does not hold a strong reference to an object passed to JavaScript. In such case, the Java object might be garbage collected prematurely. The solution is to modify the application to hold a strong reference in Java code for objects that should remain live after being passed into JavaScript.
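A minimal sketch of the recommended fix (JavaBridge and the member name javaBridge are hypothetical): the application keeps a strong reference to the bound object in a field so that it remains live after being passed into JavaScript.
import javafx.scene.web.WebEngine;
import netscape.javascript.JSObject;

public class BridgeHolder {
    // Strong reference held in Java code; the WebView JavaScript runtime
    // itself now holds only a weak reference to the bound object.
    private final JavaBridge bridge = new JavaBridge();

    public void bind(WebEngine engine) {
        JSObject window = (JSObject) engine.executeScript("window");
        window.setMember("javaBridge", bridge);
    }

    public static class JavaBridge {
        public void log(String msg) { System.out.println(msg); }
    }
}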
The javax.rmi.CORBA.Util class provides methods that can be used by stubs and ties to perform common operations. It also acts as a factory for ValueHandlers. The javax.rmi.CORBA.ValueHandler interface provides services to support the reading and writing of value types to GIOP streams. The security awareness of these utilities has been enhanced with the introduction of a permission java.io.SerializablePermission("enableCustomValueHanlder"). This is used to establish a trust relationship between the users of the javax.rmi.CORBA.Util and javax.rmi.CORBA.ValueHandler APIs.
The required permission is "enableCustomValueHanlder" SerializablePermission. Third party code running with a SecurityManager installed, but not having the new permission while invoking Util.createValueHandler(), will fail with an AccessControlException.
This permission check behaviour can be overridden, in JDK8u and previous releases, by defining a system property, "jdk.rmi.CORBA.allowCustomValueHandler".
As such, external applications that explicitly call javax.rmi.CORBA.Util.createValueHandler require a configuration change to function when a SecurityManager is installed and neither of the following two requirements is met:
1. The java.io.SerializablePermission("enableCustomValueHanlder") is granted by the SecurityManager.
2. In the case of applications running on JDK8u and before, the system property "jdk.rmi.CORBA.allowCustomValueHandler" is defined and equal to "true" (case insensitive).
Please note that the "enableCustomValueHanlder" typo will be corrected in the October 2016 releases. In those and future JDK releases, "enableCustomValueHandler" will be the correct SerializablePermission to use.
If the singleton ORB is configured with the system property org.omg.CORBA.ORBSingletonClass or the equivalent key in orb.properties, then the class must be visible to the system class loader. Previous releases incorrectly attempted to load the class using the Thread Context Class Loader (TCCL). An @implNote has been added to org.omg.CORBA.ORB to document the behavior.
The change does not impact the loading of ORB implementations configured with the system property org.omg.CORBA.ORBClass or the equivalent key in orb.properties. The ORB implementation configured with this property is loaded using the TCCL to allow for applications that bundle an ORB implementation with the application.
orb.idl and ir.idl have moved from the JDK lib directory to the include directory. Applications that use a CORBA IDL compiler in their build may need to change the include path from $JAVA_HOME/lib to $JAVA_HOME/include.
org.omg.CORBA.ORB specifies the search order to locate an ORB's orb.properties file, and this includes searching ${java.home}/lib. The JDK 9 release includes a ${java.home}/conf directory as the location for properties files. As such, the ORB.init processing has been amended to include the ${java.home}/conf directory in its search path for an orb.properties file. The preferred approach is thus to use the ${java.home}/conf directory, in preference to the ${java.home}/lib directory, as the location for an orb.properties file.
With one exception, keytool will always print a warning if the certificate, certificate request, or CRL it is parsing, verifying, or generating uses a weak algorithm or key. The exception is when a certificate is from an existing TrustedCertificateEntry, either in the keystore directly operated on or in the cacerts keystore when the -trustcacerts option is specified for the -importcert command; in that case keytool will not print a warning if it is signed with a weak signature algorithm. For example, suppose the file cert contains a CA certificate signed with a weak signature algorithm. Both keytool -printcert -file cert and keytool -importcert -file cert -alias ca -keystore ks will print a warning, but after the latter command imports it into the keystore, keytool -list -alias ca -keystore ks will no longer show a warning.
An algorithm or a key is weak if it matches the value of the jdk.certpath.disabledAlgorithms security property defined in the conf/security/java.security file.
One new root certificate has been added:
ISRG Root X1 alias: letsencryptisrgx1 DN: CN=ISRG Root X1, O=Internet Security Research Group, C=US
Classes loaded from the extensions directory are no longer granted AllPermission by default. See JDK-8040059.
A custom java.security.Policy provider that was using the extensions mechanism may be depending on the policy grant statement that had previously granted it AllPermission. If the policy provider does anything that requires a permission check, the local policy file may need to be adjusted to grant those permissions.
Also, custom policy providers are loaded by the system class loader. The classpath may need to be configured to allow the provider to be located.
When using a SecurityManager, the permissions required by JDK modules are granted by default and are not dependent on the policy.url properties that are set in the java.security file. This also applies if you are setting the java.security.policy system property with either the '=' or '==' option.
Two new root certificates have been added :
D-TRUST Root Class 3 CA 2 2009 alias: dtrustclass3ca2 DN: CN=D-TRUST Root Class 3 CA 2 2009, O=D-Trust GmbH, C=DE
D-TRUST Root Class 3 CA 2 EV 2009 alias: dtrustclass3ca2ev DN: CN=D-TRUST Root Class 3 CA 2 EV 2009, O=D-Trust GmbH, C=DE
Three new root certificates have been added :
IdenTrust Public Sector Root CA 1 alias: identrustpublicca DN: CN=IdenTrust Public Sector Root CA 1, O=IdenTrust, C=US
IdenTrust Commercial Root CA 1 alias: identrustcommercial DN: CN=IdenTrust Commercial Root CA 1, O=IdenTrust, C=US
IdenTrust DST Root CA X3 alias: identrustdstx3 DN: CN=DST Root CA X3, O=Digital Signature Trust Co.
This JDK release introduces a new restriction on how MD5 signed JAR files are verified. If the signed JAR file uses MD5, signature verification operations will ignore the signature and treat the JAR as if it were unsigned. This can potentially occur in the following types of applications that use signed JAR files:
The list of disabled algorithms is controlled via the security property, jdk.jar.disabledAlgorithms, in the java.security file. This property contains a list of disabled algorithms and key sizes for cryptographically signed JAR files.
To check if a weak algorithm or key was used to sign a JAR file, one can use the jarsigner binary that ships with this JDK. Running jarsigner -verify on a JAR file signed with a weak algorithm or key will print more information about the disabled algorithm or key.
For example, to check a JAR file named test.jar, use the following command: jarsigner -verify test.jar
If the file in this example was signed with a weak signature algorithm like MD5withRSA, this output would be displayed:
"The jar will be treated as unsigned, because it is signed with a weak algorithm that is now disabled. Re-run jarsigner with the -verbose option for more details."
More details can be seen with the verbose option: jarsigner -verify -verbose test.jar
The following output would be displayed:
- Signed by "CN=weak_signer"
Digest algorithm: MD5 (weak)
Signature algorithm: MD5withRSA (weak), 512-bit key (weak)
Timestamped by "CN=strong_tsa" on Mon Sep 26 08:59:39 CST 2016
Timestamp digest algorithm: SHA-256
Timestamp signature algorithm: SHA256withRSA, 2048-bit key
To address the issue, the JAR file will need to be re-signed with a stronger algorithm or key size. Alternatively, the restrictions can be reverted by removing the applicable weak algorithms or key sizes from the jdk.jar.disabledAlgorithms security property; however, this option is not recommended. Before re-signing affected JARs, the existing signature(s) should be removed from the JAR file. This can be done with the zip utility, as follows:
zip -d test.jar 'META-INF/*.SF' 'META-INF/*.RSA' 'META-INF/*.DSA'
Please periodically check the Oracle JRE and JDK Cryptographic Roadmap at http://java.com/cryptoroadmap for planned restrictions to signed JARs and other security components.
The OpenJDK 9 binary for Linux x64 contains an empty cacerts keystore. This prevents TLS connections from being established, because there are no Trusted Root Certificate Authorities installed. You may see an exception like the following:
javax.net.ssl.SSLException: java.lang.RuntimeException: Unexpected error: java.security.InvalidAlgorithmParameterException: the trustAnchors parameter must be non-empty
As a workaround, users can set the javax.net.ssl.trustStore system property to use a different keystore. For example, the ca-certificates package on Oracle Linux 7 contains the set of Root CA certificates chosen by the Mozilla Foundation for use with the Internet PKI. This package installs a trust store at /etc/pki/java/cacerts, which can be used by OpenJDK 9.
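For example (MyApp is a placeholder):
java -Djavax.net.ssl.trustStore=/etc/pki/java/cacerts MyApp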
Only the OpenJDK 64-bit Linux download is impacted. This issue does not apply to any Oracle JRE/JDK download.
Progress on open-sourcing the Oracle JDK Root CAs can be tracked through the issue JDK-8189131.
The following have been added to the security algorithm requirements for JDK implementations (key size in parentheses):
Signature: SHA256withDSA
KeyPairGenerator: DSA (2048), DiffieHellman (2048, 4096), RSA (4096)
AlgorithmParameterGenerator: DSA (2048), DiffieHellman (2048)
Cipher: AES/GCM/NoPadding (128), AES/GCM/PKCS5Padding (128)
SSLContext: TLSv1.1, TLSv1.2
TrustManagerFactory: PKIX
This JDK release introduces new restrictions on how signed JAR files are verified. If the signed JAR file uses a disabled algorithm or key size less than the minimum length, signature verification operations will ignore the signature and treat the JAR as if it were unsigned. This can potentially occur in the following types of applications that use signed JAR files:
The list of disabled algorithms is controlled via a new security property, jdk.jar.disabledAlgorithms, in the java.security file. This property contains a list of disabled algorithms and key sizes for cryptographically signed JAR files.
The following algorithms and key sizes are restricted in this release:
1. MD2 (in either the digest or signature algorithm)
2. RSA keys less than 1024 bits
NOTE: We are planning to restrict MD5-based signatures in signed JARs in the January 2017 CPU.
To check if a weak algorithm or key was used to sign a JAR file, one can use the jarsigner binary that ships with this JDK. Running jarsigner -verify -J-Djava.security.debug=jar on a JAR file signed with a weak algorithm or key will print more information about the disabled algorithm or key.
For example, to check a JAR file named test.jar, use this command: jarsigner -verify -J-Djava.security.debug=jar test.jar
If the file in this example was signed with a weak signature algorithm like MD2withRSA, this output would be seen:
jar: beginEntry META-INF/my_sig.RSA
jar: processEntry: processing block
jar: processEntry caught: java.security.SignatureException: Signature check failed. Disabled algorithm used: MD2withRSA
jar: done with meta!
The updated jarsigner command will exit with this warning printed to standard output: "Signature not parsable or verifiable. The jar will be treated as unsigned. The jar may have been signed with a weak algorithm that is now disabled. For more information, rerun jarsigner with debug enabled (-J-Djava.security.debug=jar)"
To address the issue, the JAR file will need to be re-signed with a stronger algorithm or key size. Alternatively, the restrictions can be reverted by removing the applicable weak algorithms or key sizes from the jdk.jar.disabledAlgorithms security property; however, this option is not recommended. Before re-signing affected JARs, the existing signature(s) should be removed from the JAR. This can be done with the zip utility, as follows:
zip -d test.jar 'META-INF/*.SF' 'META-INF/*.RSA' 'META-INF/*.DSA'
Please periodically check the Oracle JRE and JDK Cryptographic Roadmap at http://java.com/cryptoroadmap for planned restrictions to signed JARs and other security components. In particular, please note the current plan to restrict MD5-based signatures in signed JARs in the January 2017 CPU.
To test whether your JARs have been signed with MD5, add "MD5" to the jdk.jar.disabledAlgorithms security property, for example:
jdk.jar.disabledAlgorithms=MD2, MD5, RSA keySize < 1024
and then run jarsigner -verify -J-Djava.security.debug=jar on your JARs as described above.
DSA keys less than 1024 bits are not strong enough and should be restricted in certification path building and validation. Accordingly, DSA keys less than 1024 bits have been deactivated by default by adding "DSA keySize < 1024" to the "jdk.certpath.disabledAlgorithms" security property. Applications can update this restriction in the security property ("jdk.certpath.disabledAlgorithms") and permit smaller key sizes if really needed (for example, "DSA keySize < 768").
The implementation of the checkPackageAccess and checkPackageDefinition methods of java.lang.SecurityManager now automatically restricts all non-exported packages of JDK modules loaded by the platform class loader or its ancestors. This is in addition to any packages listed in the package.access and package.definition security properties. A "non-exported package" refers to a package that is not exported to all modules. Specifically, it refers to a package that either is not exported at all by its containing module or is exported in a qualified fashion by its containing module.
If your application is running with a SecurityManager, it will need to be granted an appropriate accessClassInPackage.{package} RuntimePermission to access any internal JDK APIs (in addition to specifying an appropriate --add-exports option). If the application has not been granted access, a SecurityException will be thrown.
Note that an upgraded JDK module may have a different set of internal packages than the corresponding system module, and therefore may require a different set of permissions.
The package.access and package.definition properties no longer contain internal JDK packages that are not exported. Therefore, if an application calls Security.getProperty("package.access"), it will not include the built-in non-exported JDK packages.
Also, when running under a SecurityManager, an attempt to access a type in a restricted package that does not contain any classes now throws a ClassNotFoundException instead of an AccessControlException. For example, loading sun.Foo now throws a ClassNotFoundException instead of an AccessControlException because there are no classes in the sun package.
An error was corrected for PBE using 256-bit AES ciphers such that the derived key may be different and not equivalent to keys previously derived from the same password.
To improve security, the default key size for the RSA and DiffieHellman KeyPairGenerator implementations and the DiffieHellman AlgorithmParameterGenerator implementations has been increased from 1024 bits to 2048 bits. The default key size for the DSA KeyPairGenerator and AlgorithmParameterGenerator implementations remains at 1024 bits to preserve compatibility with applications that are using keys of that size with the SHA1withDSA signature algorithm.
With increases in computing power and advances in cryptography, the minimum recommended key size increases over time. Therefore, future versions of the platform may increase the default size.
For signature generation, if the security strength of the digest algorithm is weaker than the security strength of the key used to sign the signature (e.g. using (2048, 256)-bit DSA keys with SHA1withDSA signature), the operation will fail with the error message: "The security strength of SHA1 digest algorithm is not sufficient for this key size."
The Comodo "UTN - DATACorp SGC" root CA certificate has been removed from the cacerts file.
As of JDK 9, the default keystore type (format) is "pkcs12", which is based on the RSA PKCS12 Personal Information Exchange Syntax Standard. Previously, the default keystore type was "jks", which is a proprietary format. Other keystore formats are available, such as "jceks", an alternate proprietary keystore format with stronger encryption than "jks", and "pkcs11", which is based on the RSA PKCS11 Standard and supports access to cryptographic tokens such as hardware security modules and smartcards.
Due to the more rigorous procedure of reading a keystore content, some keystores (particularly, those created with old versions of the JDK or with a JDK from other vendors) might need to be regenerated.
The following procedure can be used to import the keystore:
Before you start, create a backup of your keystore. For example, if your keystore file is /DIR/KEYSTORE
, make a copy of it:
cp /DIR/KEYSTORE /DIR/KEYSTORE.BK
Download an older release of the JDK, prior to CPU17_04, and install it in a separate location. For example: 6u161, 7u151, or 8u141. Suppose that older JDK is installed in the directory /JDK8U141.
Make sure that the keystore can be successfully read with the keytool from that older directory. For example, if the keystore file is located in /DIR/KEYSTORE
, the following command should successfully list its content:
/JDK8U141/bin/keytool -list -keystore /DIR/KEYSTORE
Import the keystore. For example:
/JDK8U141/bin/keytool -importkeystore \
-srckeystore /DIR/KEYSTORE \
-srcstoretype JCEKS \
-srcstorepass PASSWORD \
-destkeystore /DIR/KEYSTORE.NEW \
-deststoretype JCEKS \
-deststorepass PASSWORD
Verify that the newly created keystore is correct. At the very least, make sure that the keystore can be read with keytool from a newer JDK:
/NEW_JDK/bin/keytool -list -keystore /DIR/KEYSTORE.NEW
After successful verification, replace the old keystore with the new one:
mv /DIR/KEYSTORE.NEW /DIR/KEYSTORE
Keep the backup copy of the keystore at least until you are sure the imported keystore is correct.
A new constraint named 'usage' has been added to the 'jdk.certpath.disabledAlgorithms' security property, that when set, restricts the algorithm if it is used in a certificate chain for the specified usage(s). Three usages are initially supported: 'TLSServer' for restricting authentication of TLS server certificate chains, 'TLSClient' for restricting authentication of TLS client certificate chains, and 'SignedJAR' for restricting certificate chains used with signed JARs. This should be used when disabling an algorithm for all usages is not practical. The usage type follows the keyword and more than one usage type can be specified with a whitespace delimiter. For example, to disable SHA1 for TLS server and client certificate chains, add the following to the property: "SHA1 usage TLSServer TLSClient"
The 'denyAfter' constraint has been added to the 'jdk.jar.disabledAlgorithms' security property. When set, it restricts the specified algorithm if it is used in a signed JAR after the specified date, as follows:
a. if the JAR is not timestamped, it will be restricted (treated as unsigned) after the specified date
b. if the JAR is timestamped, it will not be restricted if it is timestamped before the specified date.
For example, to restrict usage of SHA1 in jar files signed after January 1, 2018, add the following to the property: "SHA1 denyAfter 2018-01-01".
Applications which use static ProtectionDomain objects (created using the 2-arg constructor) with an insufficient set of permissions may now get an AccessControlException with this fix. They should either replace the static ProtectionDomain objects with dynamic ones (using the 4-arg constructor) whose permission set will be expanded by the current Policy or construct the static ProtectionDomain object with all the necessary permissions.
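A minimal sketch of the two constructors (the code source and permissions here are illustrative):
import java.security.CodeSource;
import java.security.Permissions;
import java.security.ProtectionDomain;
import java.security.cert.Certificate;

public class PdExample {
    public static void main(String[] args) {
        CodeSource cs = new CodeSource(null, (Certificate[]) null);
        Permissions perms = new Permissions();
        perms.add(new RuntimePermission("getClassLoader"));

        // 2-arg constructor: a static domain; its permission set is final and
        // the current Policy is never consulted, which can now lead to an
        // AccessControlException if the set is insufficient.
        ProtectionDomain staticPd = new ProtectionDomain(cs, perms);

        // 4-arg constructor: a dynamic domain; permissions may be expanded by
        // the current Policy when access is checked.
        ProtectionDomain dynamicPd = new ProtectionDomain(cs, perms, null, null);

        System.out.println(staticPd.staticPermissionsOnly());  // true
        System.out.println(dynamicPd.staticPermissionsOnly()); // false
    }
}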
Default signature algorithms for jarsigner and keytool are determined by both the algorithm and the key size of the private key, making use of comparable strengths as defined in Tables 2 and 3 of NIST SP 800-57 Part 1 Rev. 4. Specifically, for a DSA or RSA key with a key size greater than 7680 bits, or an EC key with a key size greater than or equal to 512 bits, SHA-512 will be used as the hash function for the signature algorithm. For a DSA or RSA key with a key size greater than 3072 bits, or an EC key with a key size greater than or equal to 384 bits, SHA-384 will be used. Otherwise, SHA-256 will be used. The value may change in the future.
For DSA keys, the default key size for keytool has changed from 1024 bits to 2048 bits.
There are a few potential compatibility risks associated with these changes:
If you use jarsigner to sign JARs with the new defaults, releases earlier than this one might not support the stronger defaults and will not be able to verify the JAR. Running jarsigner -verify on such a release will output the following error:
jar is unsigned. (signatures missing or not parsable)
If you add -J-Djava.security.debug=jar to the jarsigner command line, the cause will be output:
jar: processEntry caught: java.security.NoSuchAlgorithmException: SHA256withDSA Signature not available
If compatibility with earlier releases is important, you can, at your own risk, use the -sigalg option of jarsigner and specify the weaker SHA1withDSA algorithm.
If you use a PKCS11 keystore, the SunPKCS11 provider may not support the SHA256withDSA algorithm. jarsigner and some keytool commands may fail with the following exception if PKCS11 is specified with the -storetype option, for example:
keytool error: java.security.InvalidKeyException: No installed provider supports this key: sun.security.pkcs11.P11Key$P11PrivateKey
A similar error may occur if you are using NSS with the SunPKCS11 provider. The workaround is to use the -sigalg option of keytool and specify SHA1withDSA.
If you have a script that uses the default key size of keytool to generate a DSA keypair but then subsequently specifies a specific signature algorithm, for example:
keytool -genkeypair -keyalg DSA -keystore keystore -alias mykey ...
keytool -certreq -sigalg SHA1withDSA -keystore keystore -alias mykey ...
it will fail with one of the following exceptions, because the new 2048-bit key size default is too strong for SHA1withDSA:
keytool error: java.security.InvalidKeyException: The security strength of SHA-1 digest algorithm is not sufficient for this key size
keytool error: java.security.InvalidKeyException: DSA key must be at most 1024 bits
You will see a similar error if you use jarsigner to sign JARs using the new 2048-bit DSA key with -sigalg SHA1withDSA set.
The workaround is to remove the -sigalg option and use the stronger SHA256withDSA default or, at your own risk, use the -keysize option of keytool to create new keys of a smaller key size (1024).
See JDK-8057810, JDK-8056174 and JDK-8138766 for more details.
In order to support longer key lengths and stronger signature algorithms, a new JCE Provider Code Signing root certificate authority has been created and its certificate added to Oracle JDK. New JCE provider code signing certificates issued from this CA will be used to sign JCE providers at a date in the near future. By default, new requests for JCE provider code signing certificates will be issued from this CA.
Existing certificates from the current JCE provider code signing root will continue to validate. However, this root CA may be disabled at some point in the future. We recommend that new certificates be requested and existing provider JARs be re-signed.
For details on the JCE provider signing process, please refer to the "How to Implement a Provider in the Java Cryptography Architecture" documentation.
Inputs to the javax.security.auth.Subject class now prohibit null values in the constructors and in modification operations on the Principal and credential Set objects returned by Subject methods.
For the non-default constructor, the principals, pubCredentials, and privCredentials parameters may not be null, nor may any element within the Sets be null. A NullPointerException will be thrown if null values are provided.
For operations performed on Set objects returned by getPrincipals(), getPrivateCredentials() and getPublicCredentials(), a NullPointerException is thrown under the following conditions:
The jarsigner tool has been enhanced to show details of the algorithms and keys used to generate a signed JAR file and will also provide an indication if any of them are considered weak.
Specifically, when "jarsigner -verify -verbose filename.jar" is called, a separate section is printed out showing information about the signature and timestamp (if it exists) inside the signed JAR file, even if it is treated as unsigned for various reasons. If any algorithm or key used is considered weak, as specified in the security property jdk.jar.disabledAlgorithms, it will be labeled with "(weak)".
For example:
- Signed by "CN=weak_signer"
Digest algorithm: MD2 (weak)
Signature algorithm: MD2withRSA (weak), 512-bit key (weak)
Timestamped by "CN=strong_tsa" on Mon Sep 26 08:59:39 CST 2016
Timestamp digest algorithm: SHA-256
Timestamp signature algorithm: SHA256withRSA, 2048-bit key
SecureRandom objects are safe for use by multiple concurrent threads. A SecureRandom service provider can advertise that it is thread-safe by setting the service provider attribute "ThreadSafe" to "true" when registering the provider. Otherwise, the SecureRandom class will synchronize access to the following SecureRandomSpi methods: SecureRandomSpi.engineSetSeed(byte[]), SecureRandomSpi.engineNextBytes(byte[]), SecureRandomSpi.engineNextBytes(byte[], SecureRandomParameters), SecureRandomSpi.engineGenerateSeed(int), and SecureRandomSpi.engineReseed(SecureRandomParameters).
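A minimal sketch of advertising thread safety when registering a SecureRandom service (MyProvider and com.example.MyPrngSpi are hypothetical names):
import java.security.Provider;
import java.util.Map;

public final class MyProvider extends Provider {
    public MyProvider() {
        super("MyProvider", "1.0", "Example provider with a thread-safe SecureRandom");
        // The "ThreadSafe" attribute tells the SecureRandom class not to
        // synchronize access to the SPI methods listed above.
        putService(new Provider.Service(this, "SecureRandom", "MyPRNG",
                "com.example.MyPrngSpi", null, Map.of("ThreadSafe", "true")));
    }
}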
More checks are added to the DER encoding parsing code to catch various encoding errors. In addition, signatures which contain constructed indefinite length encoding will now lead to IOException during parsing. Note that signatures generated using JDK default providers are not affected by this change.
Keytool now prints out the key algorithm and key size of a certificate's public key, in the form of "Subject Public Key Algorithm: <size>-bit RSA key", where <size> is the key size in bits (for example, 2048).
As part of the work for JEP 220, "Modular Run-Time Images", the security provider loading mechanism has been enhanced to support modular providers through java.util.ServiceLoader. The default JDK security providers have been refactored into modular providers and are registered in the java.security file by provider name instead of provider class name. Providers that have not been reworked into modules should still be registered by provider class name in the java.security file.
SecureRandom.PKCS11 from the SunPKCS11 provider is disabled by default on Solaris because the native PKCS11 implementation has poor performance and is not recommended. If your application requires SecureRandom.PKCS11, you can re-enable it by removing "SecureRandom" from the disabledMechanisms list in conf/security/sunpkcs11-solaris.cfg.
Performance improvements have also been made in the java.security.SecureRandom class. Improvements in the JDK implementation have allowed synchronization to be removed from the java.security.SecureRandom.nextBytes(byte[] bytes) method.
The "Sonera Class1 CA" root CA certificate has been removed from the cacerts file.
A new -tsadigestalg option is added to jarsigner to specify the message digest algorithm that is used to generate the message imprint to be sent to the TSA server. In older JDK releases, the message digest algorithm used was SHA-1. If this new option is not specified, SHA-256 will be used on JDK 7 Updates and later JDK family versions. On JDK 6 Updates, SHA-1 will remain the default but a warning will be printed to the standard output stream.
If a JAR file was signed with a timestamp when the signer certificate was still valid, it should remain valid even after the signer certificate expires. However, jarsigner will incorrectly show a warning that the signer's certificate chain is not validated. This will be fixed in a future release.
In this update, MD5 is added to the jdk.certpath.disabledAlgorithms security property, and the use of the MD5 hash algorithm in certification path processing is restricted in the Oracle JRE. Applications using certificates signed with an MD5 hash algorithm should upgrade their certificates as soon as possible.
Note that this is a behavior change of the Oracle JRE. It is not guaranteed that the security property (jdk.certpath.disabledAlgorithms) is examined and used by other JRE implementations.
Eight new root certificates have been added :
QuoVadis Root CA 1 G3 alias: quovadisrootca1g3 DN: CN=QuoVadis Root CA 1 G3, O=QuoVadis Limited, C=BM
QuoVadis Root CA 2 G3 alias: quovadisrootca2g3 DN: CN=QuoVadis Root CA 2 G3
QuoVadis Root CA 3 G3 alias: quovadisrootca3g3 DN: CN=QuoVadis Root CA 3 G3, O=QuoVadis Limited, C=BM
DigiCert Assured ID Root G2 alias: digicertassuredidg2 DN: CN=DigiCert Assured ID Root G2, OU=www.digicert.com, O=DigiCert Inc, C=US
DigiCert Assured ID Root G3 alias: digicertassuredidg3 DN: CN=DigiCert Assured ID Root G3, OU=www.digicert.com, O=DigiCert Inc, C=US
DigiCert Global Root G2 alias: digicertglobalrootg2 DN: CN=DigiCert Global Root G2, OU=www.digicert.com, O=DigiCert Inc, C=US
DigiCert Global Root G3 alias: digicertglobalrootg3 DN: CN=DigiCert Global Root G3, OU=www.digicert.com, O=DigiCert Inc, C=US
DigiCert Trusted Root G4 alias: digicerttrustedrootg4 DN: CN=DigiCert Trusted Root G4, OU=www.digicert.com, O=DigiCert Inc, C=US
The JDK uses the Java Cryptography Extension (JCE) Jurisdiction Policy files to configure cryptographic algorithm restrictions. Previously, the policy files in the JDK placed limits on various algorithms. This release ships with both the limited and unlimited jurisdiction policy files, with unlimited being the default. The behavior can be controlled via the new crypto.policy Security property found in the <java-home>/lib/java.security file. Refer to that file for more information on this property.
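For example, the effective policy can be switched by editing this property in <java-home>/lib/java.security (the line below reflects the new default):
crypto.policy=unlimited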
To meet cryptographic regulations, in previous releases third-party Java Cryptography Extension (JCE) provider code was required to be packaged in a JAR file and properly signed by a public key/certificate issued by a JCE Certificate Authority (CA). This requirement also exists in JDK 9.
JDK 9 introduces the concept of modules along with some new file formats to support features such as custom images, but signed modules (e.g., signed JMOD files) are not currently supported. The jlink tool also does not preserve signature information when creating such custom run-time images. Thus all third-party JCE providers must still be packaged as either signed JAR or signed modular JAR files, and deployed by placing them either on the class path (unnamed modules) or the module path (automatic/named modules).
This release introduces several changes to the JCE Jurisdiction Policy files.
Previously, to allow unlimited cryptography in the JDK, separate JCE Jurisdiction Policy files had to be downloaded and installed. The download and install steps are no longer necessary.
Both the strong but "limited" (traditional default) and the "unlimited" policy files are included in this release.
A new Security property (crypto.policy) was introduced to control which policy files are active. The new default is "unlimited".
The files are now user-editable to allow for customized Policy configuration.
Please see the Java Cryptography Architecture (JCA) Reference Guide for more information.
Also see:
JDK-8186093: java.security configuration file still says that "strong but limited" is the default value
Java SE KeyStore does not allow certificates that have the same aliases. http://docs.oracle.com/javase/8/docs/api/java/security/KeyStore.html
However, on Windows, multiple certificates stored in one keystore are allowed to have non-unique friendly names.
The fix for JDK-6483657 makes it possible to operate on such non-uniquely named certificates through the Java API by artificially making the visible aliases unique.
Please note, this fix does not enable creating same-named certificates with the Java API. It only allows you to deal with same-named certificates that were added to the keystore by 3rd party tools.
It is still recommended that your design not use multiple certificates with the same name. In particular, the following sentence will not be removed from the Java documentation: "In order to avoid problems, it is recommended not to use aliases in a KeyStore that only differ in case." http://docs.oracle.com/javase/8/docs/api/java/security/KeyStore.html
The SunPKCS11 provider has re-enabled support for various message digest algorithms such as MD5, SHA1, and SHA2 on Solaris. If you are using Solaris 10 and experience a CloneNotSupportedException or PKCS11 error CKR_SAVED_STATE_INVALID, you should verify and apply the following patches or newer versions of them: 150531-02 on SPARC and 150636-01 on x86.
For the SSL/TLS/DTLS protocols, the security strength of 3DES cipher suites is not sufficient for persistent connections. Because "3DES_EDE_CBC" has been added to the "jdk.tls.legacyAlgorithms" security property by default in the JDK, 3DES cipher suites will not be negotiated unless there are no other candidates when establishing SSL/TLS/DTLS connections.
At their own risk, applications can update this restriction in the security property ("jdk.tls.legacyAlgorithms") if 3DES cipher suites are really preferred.
Diffie-Hellman keys less than 1024 bits are considered too weak to use in practice and should be restricted by default in SSL/TLS/DTLS connections. Accordingly, Diffie-Hellman keys less than 1024 bits have been disabled by default by adding "DH keySize < 1024" to the jdk.tls.disabledAlgorithms security property in the java.security file. Although it is not recommended, administrators can update the security property (jdk.tls.disabledAlgorithms) and permit smaller key sizes (for example, by setting "DH keySize < 768").
To improve the default strength of EC cryptography, EC keys less than 224 bits have been deactivated in certification path processing (via the "jdk.certpath.disabledAlgorithms" Security Property) and SSL/TLS/DTLS connections (via the "jdk.tls.disabledAlgorithms" Security Property) in JDK. Applications can update this restriction in the Security Properties and permit smaller key sizes if really needed (for example, "EC keySize < 192").
EC curves less than 256 bits are removed from the SSL/TLS/DTLS implementation in JDK. The new System Property, "jdk.tls.namedGroups", defines a list of enabled named curves for EC cipher suites in order of preference. If an application needs to customize the default enabled EC curves or the curves preference, please update the System Property accordingly. For example:
jdk.tls.namedGroups="secp256r1, secp384r1, secp521r1"
Note that the default enabled or customized EC curves follow the algorithm constraints. For example, the customized EC curves cannot re-activate the disabled EC keys defined by the Java Security Properties.
Recent JDK updates introduced an issue for applications that depend on having a delayed provider selection mechanism. The issue was introduced in JDK 8u71, JDK 7u95 and JDK 6u111. The main error seen corresponded to an exception like the following:
handling exception: javax.net.ssl.SSLProtocolException: Unable to process PreMasterSecret, may be too big
A recent issue from the JDK-8148516 fix can cause issues for some TLS servers. The problem originates from an IllegalArgumentException thrown by the TLS handshaker code.
java.lang.IllegalArgumentException: System property jdk.tls.namedGroups(null) contains no supported elliptic curves
The issue can arise when the server doesn't have elliptic curve cryptography support to handle an elliptic curve name extension field (if present). Users are advised to upgrade to this release. By default, JDK 7 Updates and later JDK families ship with the SunEC security provider which provides elliptic curve cryptography support. Those releases should not be impacted unless security providers are modified.
The MD5withRSA signature algorithm is now considered insecure and should no longer be used. Accordingly, MD5withRSA has been deactivated by default in the Oracle JSSE implementation by adding "MD5withRSA" to the "jdk.tls.disabledAlgorithms" security property. Now, both TLS handshake messages and X.509 certificates signed with MD5withRSA algorithm are no longer acceptable by default. This change extends the previous MD5-based certificate restriction ("jdk.certpath.disabledAlgorithms") to also include handshake messages in TLS version 1.2. If required, this algorithm can be reactivated by removing "MD5withRSA" from the "jdk.tls.disabledAlgorithms" security property.
The requirement to have the Authority Key Identifier (AKID) and Subject Key Identifier (SKID) fields matching when building X509 certificate chains has been modified for some cases.
SunJSSE allows SHA224 as an available signature and hash algorithm for TLS 1.2 connections. However, the current implementation of SunMSCAPI does not support SHA224 yet. This can cause problems if SHA224 and SunMSCAPI private keys are used at the same time.
To mitigate the problem, we remove SHA224 from the default support list if SunMSCAPI is enabled.
Ephemeral DH keys less than 768 bits are deactivated in JDK. New algorithm restriction "DH keySize < 768" is added to Security Property "jdk.tls.disabledAlgorithms".
In TLS, a ciphersuite defines a specific set of cryptography algorithms used in a TLS connection. JSSE maintains a prioritized list of ciphersuites. In this update, GCM-based cipher suites are configured as the most preferable default cipher suites in the SunJSSE provider.
In the SunJSSE provider, the following ciphersuites are now the most preferred by default:
TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
TLS_RSA_WITH_AES_256_GCM_SHA384
TLS_ECDH_ECDSA_WITH_AES_256_GCM_SHA384
TLS_ECDH_RSA_WITH_AES_256_GCM_SHA384
TLS_DHE_RSA_WITH_AES_256_GCM_SHA384
TLS_DHE_DSS_WITH_AES_256_GCM_SHA384
TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
TLS_RSA_WITH_AES_128_GCM_SHA256
TLS_ECDH_ECDSA_WITH_AES_128_GCM_SHA256
TLS_ECDH_RSA_WITH_AES_128_GCM_SHA256
TLS_DHE_RSA_WITH_AES_128_GCM_SHA256
TLS_DHE_DSS_WITH_AES_128_GCM_SHA256
Note that this is a behavior change of the SunJSSE provider in the JDK; it is not guaranteed to be examined and used by other JSSE providers. There is no guarantee that the cipher suite priorities will remain the same in future updates or releases.
After this change, besides implementing the necessary methods (initialize, login, logout, commit, abort), any login module must implement the LoginModule interface. Otherwise, a LoginException will be thrown when the login module is used.
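A minimal sketch of a conforming login module (SampleLoginModule is a hypothetical name; the method bodies are placeholders):
import java.util.Map;
import javax.security.auth.Subject;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.login.LoginException;
import javax.security.auth.spi.LoginModule;

public class SampleLoginModule implements LoginModule {
    @Override
    public void initialize(Subject subject, CallbackHandler callbackHandler,
                           Map<String, ?> sharedState, Map<String, ?> options) {
        // Store the parameters needed for the later phases.
    }
    @Override public boolean login() throws LoginException { return true; }
    @Override public boolean commit() throws LoginException { return true; }
    @Override public boolean abort() throws LoginException { return true; }
    @Override public boolean logout() throws LoginException { return true; }
}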
The secure validation mode of the XML Signature implementation has been enhanced to restrict RSA and DSA keys less than 1024 bits by default, as they are no longer secure enough for digital signatures. Additionally, a new security property named jdk.xml.dsig.SecureValidationPolicy has been added to the java.security file and can be used to control the different restrictions enforced when the secure validation mode is enabled.
The secure validation mode is enabled either by setting the XML signature property org.jcp.xml.dsig.secureValidation to true with the javax.xml.crypto.XMLCryptoContext.setProperty method, or by running the code with a SecurityManager.
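A minimal sketch of enabling the mode explicitly (the key selector and signature element are assumed to be obtained elsewhere):
import javax.xml.crypto.KeySelector;
import javax.xml.crypto.dsig.dom.DOMValidateContext;
import org.w3c.dom.Node;

public class SecureValidation {
    static DOMValidateContext newContext(KeySelector selector, Node signatureElement) {
        DOMValidateContext context = new DOMValidateContext(selector, signatureElement);
        // Enable secure validation even when no SecurityManager is present
        context.setProperty("org.jcp.xml.dsig.secureValidation", Boolean.TRUE);
        return context;
    }
}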
If an XML Signature is generated or validated with a weak RSA or DSA key, an XMLSignatureException will be thrown with the message "RSA keys less than 1024 bits are forbidden when secure validation is enabled" or "DSA keys less than 1024 bits are forbidden when secure validation is enabled".
The XML Digital Signature APIs (the javax.xml.crypto package and subpackages) have been enhanced to better support generics, as follows:
Collection and Iterator parameters and return types have been changed to parameterized types
The javax.xml.crypto.NodeSetData interface has been changed to a generic type that implements Iterable so that it can be used in for-each loops (see the sketch below)
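A minimal sketch of the new for-each capability (printNodeNames is an illustrative helper):
import javax.xml.crypto.NodeSetData;
import org.w3c.dom.Node;

public class NodeSetExample {
    static void printNodeNames(NodeSetData<Node> data) {
        // NodeSetData now implements Iterable, so for-each works directly
        for (Node node : data) {
            System.out.println(node.getNodeName());
        }
    }
}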
An interoperability issue was found between Java and the native Kerberos implementation on BSD (including macOS) regarding the kdc_timeout setting in krb5.conf: Java interpreted it as milliseconds and BSD as seconds when no unit is specified. This code change adds support for the "s" (seconds) unit. Therefore, if the timeout is 5 seconds, Java accepts both "5000" and "5s". Customers concerned about interoperability between Java and BSD should use "5s".
This JDK release introduces some changes to how Kerberos requests are handled when a security manager is present.
Note that if a security manager is installed while a KerberosPrincipal is being created, a ServicePermission must be granted, and the service principal of the permission must minimally be inside the KerberosPrincipal's realm. For example, if the result of new KerberosPrincipal("user") is user@EXAMPLE.COM, then a ServicePermission with service principal host/www.example.com@EXAMPLE.COM (and any action) must be granted.
Also note that if a single GSS-API principal entity that contains a Kerberos name element without providing its realm is being created via the org.ietf.jgss.GSSName interface and a security manager is installed, then this release introduces a new requirement. A javax.security.auth.kerberos.ServicePermission must be granted, and the service principal of the permission must minimally be inside the Kerberos name element's realm. For example, if the result of GSSManager.createName("user", NT_USER_NAME) contains a Kerberos name element user@EXAMPLE.COM, then a ServicePermission with service principal host/www.example.com@EXAMPLE.COM (and any action) must be granted. Otherwise, the creation will throw a GSSException containing the GSSException.FAILURE error code.
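A sketch of a policy file grant that satisfies the example above (the "initiate" action is illustrative, since any action suffices):
grant {
    permission javax.security.auth.kerberos.ServicePermission
        "host/www.example.com@EXAMPLE.COM", "initiate";
};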
The hash algorithm used in the Kerberos 5 replay cache file (rcache) is updated from MD5 to SHA256 with this change. This is also the algorithm used by MIT krb5-1.15. This change is interoperable with earlier releases of MIT krb5, which means Kerberos 5 acceptors from JDK 9 and MIT krb5-1.14 can share the same rcache file.
A new system property named jdk.krb5.rcache.useMD5 is introduced. If the system property is set to "true", JDK 9 will still use the MD5 hash algorithm in rcache. This is useful when both of the following conditions are true: 1) the system has a very coarse clock and has to depend on hash values in replay attack detection, and 2) interoperability with earlier versions of JDK for rcache files is required. The default value of this system property is "false".
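For example (MyService is a placeholder), the old hash algorithm can be forced on the command line:
java -Djdk.krb5.rcache.useMD5=true MyService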
The end times for native TGTs (ticket-granting tickets) are now compared with UTC time stamps.
javac was erroneously accepting receiver parameters in annotation methods. This implies that test cases like the one below were being accepted:
@interface MethodRun {
int value(MethodRun this);
}
JLS 8 (see §9.6.1) doesn't allow any formal parameters in annotation methods; this extends to receiver parameters. More specifically, the grammar for annotation types does not allow arbitrary method declarations, instead allowing only AnnotationTypeElementDeclarations. The allowed syntax is:
AnnotationTypeElementDeclaration:
{AnnotationTypeElementModifier} UnannType Identifier ( ) [Dims] [DefaultValue];
Note that nothing is allowed between the parentheses.
The compiler specification (JLS 8 §18.5.2) modified the treatment of nested generic method invocations for which the return type is an inference variable. The compiler has been adapted to implement the new logic. This is important to minimize incompatibility with the javac 7 inference algorithm. Three cases are considered:
The compiler update implies an eager resolution for generic method invocations, provided that the return type is an inference variable.
Prior to JDK 9, javac set the 'static' modifier on anonymous classes declared in a static context, e.g., in static methods or static initialization blocks. This contradicts the Java Language Specification, which states that anonymous classes are never static. In JDK 9, javac does not mark anonymous classes 'static', whether they are declared in a static context or not.
The support for "argument files" on the command lines for javac, javadoc, and javah, has been updated to better align with the support for argument files on the launcher command line. This includes the following two new features:
Some obscure, undocumented escape sequences are no longer supported. The files are still read using the default platform file encoding, whereas argument files on the launcher command line should use ASCII or an ASCII-compatible encoding, such as UTF-8.
Output directories required by javac, specified with the -d, -s, and -h options, will be created if they do not already exist.
The classfile format (see JVMS section 4.7.2) defines an attribute called ConstantValue
which is used to describe the constant value associated with a given (constant) field. The layout of this attribute is as follows:
ConstantValue_attribute {
u2 attribute_name_index;
u4 attribute_length;
u2 constantvalue_index;
}
Historically, javac has never performed any kind of range validation of the value contained in the constant pool entry at constantvalue_index. As such, it is possible for a constant field of type boolean, for example, to have a constant value other than 0 or 1 (the only legal values allowed for a boolean). Starting from JDK 9, javac will detect ill-formed ConstantValue attributes and report errors if out-of-range values are found.
Previously, javac did not generate unchecked warnings when checking method reference return types, as in this example:
import java.util.function.*;
import java.util.*;
class Test {
void m() {
IntFunction<List<String>[]> sls = List[]::new; //warning
Supplier<List<String>> sls2 = this::l; //warning
}
List l() { return null; }
}
Starting from JDK 9, javac will emit a warning when unchecked conversion is required for a method reference to be compatible with a functional interface target.
This change brings the compiler in sync with JLS section 15.13.2:
A compile-time unchecked warning occurs if unchecked conversion was necessary for the compile-time declaration to be applicable, and this conversion would cause an unchecked warning in an invocation context.
and,
A compile-time unchecked warning occurs if unchecked conversion was necessary for the return type R', described above, to be compatible with the function type's return type, R, and this conversion would cause an unchecked warning in an assignment context.
Javac was not in sync with JLS 8 §15.12.1, specifically:
If the form is TypeName . super . [TypeArguments] Identifier, then: ...
Let T be the type declaration immediately enclosing the method invocation. It is a compile-time error if I is not a direct superinterface of T, or if there exists some other direct superclass or direct superinterface of T, J, such that J is a subtype of I.
So javac was not issuing a compiler error for cases like:
interface I {
default int f(){return 0;}
}
class J implements I {}
class T extends J implements I {
public int f() {
return I.super.f();
}
}
The compiler had some checks for method invocations of the form:
TypeName . super . [TypeArguments] Identifier
but there was one issue. If TypeName is an interface I and T is the type declaration immediately enclosing the method invocation, the compiler must issue a compile-time error if there exists some other direct superclass or superinterface of T, call it J, such that J is a subtype of I, as in the example above.
Reporting previously silent errors found during incorporation (JLS 8 §18.3) was supposed to be a clean-up with performance-only implications. But consider this test case:
import java.util.Arrays;
import java.util.List;

class Klass {
    public static <A> List<List<A>> foo(List<? extends A>... lists) {
        return foo(Arrays.asList(lists));
    }

    public static <B> List<List<B>> foo(List<? extends List<? extends B>> lists) {
        return null;
    }
}
This code was not accepted before the patch for [1], but after the patch the compiler accepts it. Accepting this code is the right behavior, as not reporting incorporation errors was a bug in the compiler.
While determining the applicability of the method:
<B> List<List<B>> foo(List<? extends List<? extends B>> lists)
we have the constraints:
b <: Object
t <: List<? extends B>
t <: Object
List<? extends A> <: t
First, inference variable b is selected for instantiation:
b = CAP1 of ? extends A
which implies that:
t <: List<? extends CAP1 of ? extends A>
t <: Object
List<? extends A> <: t
Now all the bounds are checked for consistency. While checking whether List<? extends A> is a subtype of List<? extends CAP1 of ? extends A>, a bound error is reported; previously, the compiler simply swallowed it. As the error is now reported while inference variable b is being instantiated, the bound set is rolled back to its initial state and b is instantiated to Object. With this instantiation the constraint set is solvable, the method is applicable, it is the only applicable one, and the code is accepted as correct. The compiler behavior in this case is defined in JLS 8 §18.4.
This fix has a source-compatibility impact: code that was previously rejected is now accepted by the javac compiler. There are currently no reports of any other kind of incompatibility.
[1] https://bugs.openjdk.java.net/browse/JDK-8078024
The javac compiler's behavior when handling wildcards and "capture" type variables has been improved for conformance to the language specification. This improves type checking behavior in certain unusual circumstances. It is also a source-incompatible change: certain uses of wildcards that have compiled in the past may fail to compile because of a program's reliance on the javac bug.
The javadoc tool now rejects any occurrences of JavaScript code in documentation comments and command-line options, unless the command-line option --allow-script-in-comments is specified.
With the --allow-script-in-comments option, the javadoc tool will preserve JavaScript code in documentation comments and command-line options. The javadoc tool gives an error if JavaScript code is found and the command-line option is not set.
If any errors are encountered while reading or analyzing the source code, the javadoc tool will treat them as unrecoverable errors and exit.
Previously javadoc would emit "public" and "abstract" modifiers for methods and fields in annotation types. These flags are not needed in source code and are elided for non-annotation interface types. With this change, those modifiers are also omitted for methods and fields defined in annotation types.
Previously, javadoc included "value=" when displaying annotations even when that text was not necessary in the source because the annotation was of a single-element annotation type (JLS 9.6, Annotation Type Elements). The extraneous "value=" text is now omitted, leading to a more concise annotation display.
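For example, for @SuppressWarnings, a single-element annotation type (chosen purely for illustration):
public class Example {
    // Both forms below are equivalent in source. Pre-JDK 9 javadoc rendered
    // the annotation as @SuppressWarnings(value="unchecked"); it now renders
    // the concise single-element form @SuppressWarnings("unchecked").
    @SuppressWarnings("unchecked")
    void convert() { }
}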
In previous releases, on platforms that supported more than one VM, the launcher could use ergonomics to select the Server VM over the Client VM. Ergonomics would identify a "server-class" machine based on the number of CPUs and the amount of memory. With modern hardware platforms, most machines are identified as server-class, and so only the Server VM is now provided on most platforms. Consequently, the ergonomic selection is redundant and has been removed. Users are advised to use the appropriate launcher VM selection flag on those systems where multiple VMs still exist.
The @Deprecated annotation was incorrectly added to the newFactory() method in javax.xml.stream.XMLInputFactory. The method should not be deprecated. The newInstance() method can be used to avoid the deprecation warning. A future release will correct this.
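A minimal sketch of the workaround:
import javax.xml.stream.XMLInputFactory;

public class FactoryDemo {
    public static void main(String[] args) {
        // Calling newInstance() instead of newFactory() avoids the
        // (erroneous) deprecation warning until a future release removes
        // the annotation; the two methods are equivalent.
        XMLInputFactory factory = XMLInputFactory.newInstance();
        System.out.println(factory.getClass().getName());
    }
}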
In accordance with XSL Transformations (XSLT) Version 1.0 (http://www.w3.org/TR/xslt), the xsl:import element is only allowed as a top-level element. The xsl:import element children must precede all other element children of an xsl:stylesheet element, including any xsl:include element children.
The JDK implementation previously allowed an xsl:import element to be erroneously placed anywhere in a stylesheet. This issue has been fixed in the JDK 9 release: the JDK implementation now rejects any XSLT stylesheet with erroneously placed import elements.
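A minimal sketch of a stylesheet that the JDK 9 implementation rejects (the stylesheet content and imported file name are illustrative):
import java.io.StringReader;
import javax.xml.transform.TransformerConfigurationException;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamSource;

public class MisplacedImport {
    public static void main(String[] args) {
        // xsl:import appears after xsl:template, i.e. not first among the
        // children of xsl:stylesheet.
        String xsl =
              "<xsl:stylesheet version='1.0' "
            + "xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>"
            + "<xsl:template match='/'/>"
            + "<xsl:import href='other.xsl'/>"
            + "</xsl:stylesheet>";
        try {
            TransformerFactory.newInstance()
                    .newTransformer(new StreamSource(new StringReader(xsl)));
        } catch (TransformerConfigurationException e) {
            // JDK 9 reports the misplaced xsl:import as an error here.
            System.out.println("rejected: " + e.getMessage());
        }
    }
}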
The defining class loader of the java.xml.ws, java.xml.bind, and java.activation modules and their classes has been changed to the platform class loader (non-null); see the specification for java.lang.ClassLoader::getPlatformClassLoader.
Existing code that makes assumptions about the defining class loader of JAX-WS, JAXB, and JAF classes may be impacted by this change (for example, a custom class loader that delegates to the bootstrap class loader, skipping the extension class loader).
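The change can be observed with a check like the following sketch (it assumes the java.xml.bind module is resolved, for example with --add-modules java.xml.bind):
public class LoaderCheck {
    public static void main(String[] args) {
        ClassLoader jaxbLoader = javax.xml.bind.JAXBContext.class.getClassLoader();
        // As of JDK 9 this prints true; previously the defining loader was
        // the bootstrap class loader, which is represented as null.
        System.out.println(jaxbLoader == ClassLoader.getPlatformClassLoader());
    }
}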
The wsimport tool has been changed to disallow DTDs in Web Service descriptions.
JAXBContext specifies that classes annotated with @XmlRootElement should be specified at context creation time to JAXBContext.newInstance(Class[] classesToBeBound, ...). If the client classes are in a named module, then the openness of the packages containing these classes is not propagated correctly when the root JAXB classes reference JAXB types in another package.
For example, given the following Java classes:
@XmlRootElement class Foo { Bar b; }
@XmlType class Bar { FooBar fb; }
@XmlType class FooBar { int x; }
The invocation of JAXBContext.newInstance(Foo.class) registers Foo and the statically referenced classes, Bar and FooBar. If Bar and FooBar are in a different package than Foo, then openness is not propagated for them with the current implementation.
The issue can be worked around by opening the package with the opens directive in the module declaration. Alternatively, the --add-opens command-line option can be used to open the package, for example:
--add-opens foo.mymodule/bar.baz=ALL-UNNAMED (for the JAXB-RI on the class path)
--add-opens foo.mymodule/bar.baz=<jaxb-impl> (for JAXB implementations on the application module path)
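A sketch of the module-declaration workaround, reusing the illustrative module and package names from the example above:
// module-info.java (module and package names are illustrative)
module foo.mymodule {
    requires java.xml.bind;
    // Open the package containing the statically referenced JAXB types
    // (Bar and FooBar) so the JAXB implementation can access them reflectively.
    opens bar.baz;
}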
Event-based XML parsers may return character data in chunks. The SAX specification states that SAX parsers may return all contiguous character data in a single chunk, or split it into several chunks; the StAX specification does not address this explicitly.
Before JDK 9, the JDK implementation returned all character data in a CDATA section in a single chunk by default. As of JDK 9, an implementation-only property, jdk.xml.cdataChunkSize, has been added to instruct a parser to return the data in a CDATA section in a single chunk when the property is zero or unspecified, or in multiple chunks when it is greater than zero. The parser splits the data at line breaks, and splits any chunk larger than the specified size into chunks equal to or smaller than that size.
The property jdk.xml.cdataChunkSize is supported through the following means:
An API setting on SAXParser or XMLReader for SAX, and on XMLInputFactory for StAX. If the property is set, its value is used in preference over any of the other settings.
A system property of the same name.
The jaxp.properties file. The value in jaxp.properties may be overridden by the system property or an API setting.
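A sketch of the API setting for StAX (the document content and chunk size are illustrative, and the property is assumed here to accept an integer value):
import java.io.StringReader;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;

public class CdataChunks {
    public static void main(String[] args) throws Exception {
        StringBuilder sb = new StringBuilder("<doc><![CDATA[");
        for (int i = 0; i < 200; i++) sb.append('x');
        sb.append("]]></doc>");

        XMLInputFactory factory = XMLInputFactory.newFactory();
        // Deliver CDATA content in chunks of at most 64 characters; zero
        // (or leaving the property unset) keeps the single-chunk behavior.
        factory.setProperty("jdk.xml.cdataChunkSize", 64);

        XMLStreamReader reader =
            factory.createXMLStreamReader(new StringReader(sb.toString()));
        while (reader.hasNext()) {
            int event = reader.next();
            if (event == XMLStreamConstants.CDATA
                    || event == XMLStreamConstants.CHARACTERS) {
                System.out.println("chunk length: " + reader.getTextLength());
            }
        }
    }
}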
The JAXP library, through Transformer and LSSerializer, supports a pretty-print feature that can format the output by adding whitespace and newlines to produce a more readable form of an XML document. As of the JDK 9 release, this feature has been enhanced to generate a format similar to that of the major web browsers. In addition, the xml:space attribute as defined in the XML specification (https://www.w3.org/TR/2006/REC-xml-20060816/#sec-white-space) is now supported.
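For reference, a minimal sketch of enabling the feature through Transformer (the input document is illustrative):
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.OutputKeys;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class PrettyPrintDemo {
    public static void main(String[] args) throws Exception {
        Transformer t = TransformerFactory.newInstance().newTransformer();
        // Turn on the pretty-print (indentation) feature.
        t.setOutputProperty(OutputKeys.INDENT, "yes");

        StringWriter out = new StringWriter();
        t.transform(new StreamSource(new StringReader("<a><b>text</b></a>")),
                    new StreamResult(out));
        System.out.println(out); // indented output on JDK 9
    }
}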
The pretty-print feature does not define the actual format. The output format can change over time or vary from implementation to implementation, and therefore should not be relied on for exact text comparison. Applications that perform such comparisons are advised to turn off the pretty-print feature and do an XML-to-XML comparison.
Before the Java SE 9 release, the DOM API package org.w3c.dom included sub-packages that were not defined as part of the Java SE API. As of Java SE 9, these sub-packages have been moved out of the java.xml module into a separate module called jdk.xml.dom. These packages are as follows:
org.w3c.dom.css
org.w3c.dom.html
org.w3c.dom.stylesheets
org.w3c.dom.xpath
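A named module that uses any of these packages now needs to require the new module. A sketch (the module name is illustrative):
// module-info.java
module my.app {
    // Needed for org.w3c.dom.css, org.w3c.dom.html,
    // org.w3c.dom.stylesheets, and org.w3c.dom.xpath as of Java SE 9.
    requires jdk.xml.dom;
}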
The JAXP library in JDK 9 has been updated to the Xerces-J 2.11.0 release. The update includes improvements and bug fixes up to the Xerces-J 2.11.0 release, but not the experimental support for XML Schema 1.1 features. Refer to the Xerces-J 2.11.0 Release Notes for more details.
The class path specified to the java launcher is expected to be a sequence of file paths. Previous releases incorrectly accepted a Windows path with a leading slash, for example -classpath /C:/classes, which was not the intended behavior. The implementation of the application class loader has been changed in JDK 9 to use the new file system API, which detects that /C:/classes is not a valid file path. Existing applications that specify such a file-URI-style path on the class path will need to change to specify a valid file path, such as -classpath C:\classes.