Fin Should Honor Docksal_volumes=nfs With Docker For Mac

Notes: This release notes document does not include security related fixes. For a list of security related fixes and advisories, see the Citrix security bulletin. This build includes fixes for 27 issues that existed in the previous NetScaler 11.1 release build. The known issues section is cumulative: it includes issues newly found in this release, and issues that were not fixed in previous NetScaler 11.1 releases.

The # XXXXXX labels under the issue descriptions are internal tracking IDs used by the NetScaler team. Additional Changes/Fixes Available in Versions. LLDP is a layer 2 protocol that enables a NetScaler appliance to advertise its identity and capabilities to directly connected (neighbor) devices, and to learn the identity and capabilities of those neighbor devices. In a cluster setup, the NetScaler GUI and NetScaler CLI now display the LLDP neighbor configuration of all or specific cluster nodes when the GUI or CLI is accessed through the cluster IP address (CLIP). Any change made to the global-level LLDP mode is applied to the global-level LLDP mode on each of the cluster nodes. You can now collect statistics for the DNS responses served from the cache, and use these statistics to set a threshold beyond which additional DNS traffic is dropped.

You can enforce the threshold with a bandwidth based policy. Previously, bandwidth calculation for a DNS load balancing virtual server was not accurate, because the number of cache hits was not reported. In proxy mode, the statistics for Request bytes, Response bytes, Total Packets rcvd, and Total Packets sent are now continuously updated. Previously, these statistics were not always updated, particularly for a DNS load balancing virtual server. The NetScaler appliance now supports the EDNS0 client subnet (ECS) option in deployments that include the NetScaler appliance configured as an ADNS server authoritative for a GSLB domain. In such a deployment, if you use static proximity as the load balancing method, you can now use the IP subnet in the ECS option, instead of the LDNS IP address, to determine the geographical proximity of the client.

In the case of a proxy mode deployment, the appliance forwards a DNS query with the ECS option as-is to the back-end servers and does not cache DNS responses that include the ECS option.
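As an illustration, enabling ECS-based static proximity for a GSLB virtual server might look like this (a minimal sketch; 'gslb_vs' is a hypothetical virtual server name, and the exact parameter name should be confirmed against the documentation):

    set gslb vserver gslb_vs -ECS ENABLED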

This enhancement gives you more control over SSL-based monitoring of back-end servers, by enabling you to bind an SSL profile to a monitor. An SSL profile contains SSL parameters, cipher bindings, and ECC bindings. For example, you can set server authentication, ciphers, and the protocol version in an SSL profile and bind the profile to a monitor. Note that to perform server authentication, you must also bind a CA certificate to the monitor, and to perform client authentication, you must bind a client certificate to the monitor. New parameters for the 'bind lb monitor' command enable you to do so.
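A sketch of what such a configuration might look like; 'mon_ssl_prof', 'https_mon', and 'ca_cert' are hypothetical names, and the exact new parameter names for 'bind lb monitor' are assumptions to confirm against the documentation:

    add ssl profile mon_ssl_prof -sslProfileType BackEnd
    add lb monitor https_mon HTTP -secure YES
    bind lb monitor https_mon -sslProfile mon_ssl_prof
    bind lb monitor https_mon -certkeyName ca_cert -CA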

In certain cases, the cores of a NetScaler appliance might not be synchronized, because a core-to-core monitoring or service update has not reached one of the cores. For example, if the core that owns persistency has not received notification that a service is DOWN, that service remains in the persistency table. If a traffic-owner core that has been notified that the service is DOWN finds it in the persistency table, it requests a different service from the persistency-owner core, so that it can redirect the request. Before this enhancement, if the persistency owner returned the same service, the traffic-owner core dropped the user's request. Now, instead of immediately dropping the request, the traffic owner queries the persistency owner a second time.

Sending the second query usually gives the persistency owner enough time to receive the update, in which case it returns a different service. The global AAA parameter ('set aaa param -maxaAAUser') is now automatically increased or decreased when concurrent user (CCU) licenses are added or removed. Previously, the MaxAAAUser count had to be adjusted manually after extra licenses were added. This value represents the maximum number of global AAA sessions that can exist. If you want to restrict the number of AAA sessions to a value lower than the licensed limit, you can set the maxaAAUser parameter on the gateway virtual server.
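For instance, restricting sessions below the licensed limit on a gateway virtual server might look like this (a sketch; 'gw_vs' is a hypothetical virtual server name, and the exact parameter spelling, quoted above as maxaAAUser, may vary by build):

    set vpn vserver gw_vs -maxAAAUsers 500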

During certificate authentication, if only one certificate is present on a client's computer, it is now chosen by default, and the user is no longer prompted to select a certificate. However, if two or more certificates are present, the user is prompted to select one. Additionally, if the certificate is successfully authenticated, the certificate preference is automatically saved. The preference is removed if certificate authentication later fails, or if the user manually clears the saved certificate option in the NetScaler Gateway Plug-in preferences.

The NetScaler now supports using a source port from a specified port range for communicating with the servers. One use case for this feature is servers that are configured to identify received traffic as belonging to a specific set, on the basis of the source port, for logging and monitoring purposes.

For example, internal and external traffic can be identified for logging purposes. For more information, see http://docs.citrix.com/en-us/netscaler/11-1/load-balancing/load-balancing-manage-clienttraffic/use-specified-sourceport.html.
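A sketch of how this might be configured, assuming the feature is exposed through a net profile ('np_internal', 'web_vs', and the port range are hypothetical, and the parameter should be confirmed against the linked documentation):

    add netProfile np_internal -srcportrange 5000-6000
    set lb vserver web_vs -netProfile np_internal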

Some situations might demand that the NetScaler appliance drop specific outgoing packets instead of routing them, for example, in testing cases and during deployment migration. NULL policy based routes (PBRs) can be used to drop specific outgoing packets. A NULL PBR is a PBR that has the nexthop parameter set to NULL. The NetScaler appliance drops outgoing packets that match a NULL PBR. For more information, see http://docs.citrix.com/en-us/netscaler/11-1/networking/ip-routing/configuring-policy-based-routes/null-policy-based-routes-drop-outgoing-packets.html.
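For instance, a NULL PBR that drops outgoing packets from a test subnet might look like this (a sketch; the name and subnet are hypothetical):

    add ns pbr pbr_drop_test ALLOW -srcIP = 10.102.33.0-10.102.33.255 -nexthop NULL
    apply ns pbrs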

In an HA setup, connection failover (or connection mirroring) refers to the process of keeping an established TCP or UDP connection active when a failover occurs. The primary appliance sends messages to the secondary appliance to synchronize current information about the RNAT connections. The secondary appliance uses this connection information only in the event of a failover. When a failover occurs, the new primary NetScaler appliance has information about the connections established before the failover and therefore continues to serve those connections even after the failover. From the client's perspective, this failover is transparent, although the client and server may experience a brief disruption and retransmissions during the transition period. CloudBridge Connector tunnels can now be used to extend an enterprise's VLAN to a cloud.

VLANs extended from multiple enterprises can have overlapping VLAN IDs. You can isolate each enterprise's VLANs by mapping them to a unique VXLAN in the cloud. On the NetScaler appliance that is the CloudBridge Connector endpoint in the cloud, you can configure a VXLAN-VLAN map that links an enterprise's VLANs to a unique VXLAN in the cloud. VXLANs now support VLAN tagging for extending multiple VLANs of an enterprise from CloudBridge Connector to the same VXLAN.
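A minimal sketch of such a mapping ('vx_map1' and the IDs are hypothetical, and the exact binding syntax should be confirmed against the documentation):

    add vxlanVlanMap vx_map1
    bind vxlanVlanMap vx_map1 -vlan 2-5 -vxlan 3000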

A NetScaler appliance can now play the service-function role in a service function chaining (SFC) architecture. The appliance receives packets with Network Service Headers (NSH) and, after performing the service, modifies the NSH bits in the response packet to indicate that the service has been performed. In that role, the appliance supports symmetric service chaining with features such as INAT, TCP and UDP load balancing services, and routing. As a service function, the NetScaler appliance does not support IPv6 or reclassification.

In a load balancing configuration in DSR mode using the TOS field, monitoring its services requires a TOS monitor to be created and bound to those services. A separate TOS monitor is required for each such configuration, because a TOS monitor requires the VIP address and the TOS ID to create an encoded value of the VIP address. The monitor creates probe packets in which the TOS field is set to the encoded value of the VIP address, and sends them to the servers represented by the services of the load balancing configuration. With a large number of load balancing configurations, creating and managing a separate custom TOS monitor for each configuration is a cumbersome task. Now, you can create wildcard TOS monitors instead. You need to create only one wildcard TOS monitor for all load balancing configurations that use the same protocol (for example, TCP or UDP).
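A sketch of what a wildcard TOS monitor might look like ('wtos_tcp' and 'svc_dsr_1' are hypothetical names, and treating an omitted TOS ID as the wildcard is an assumption to confirm against the documentation):

    add lb monitor wtos_tcp TCP -tos YES
    bind lb monitor wtos_tcp svc_dsr_1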

In a cluster deployment of NetScaler appliances, you can use the new command 'show prop status' for faster monitoring and troubleshooting of issues related to command-propagation failure on non-CCO nodes.

This command displays up to 20 of the most recent command propagation failures on all non-CCO nodes. You can use either the NetScaler command line or the NetScaler GUI to perform this operation, after accessing it through the CLIP address or through the NSIP address of any node in the cluster deployment. In a cluster deployment, when the client-side or server-side link to a node goes down, traffic is steered to this node through the peer nodes for processing. Previously, the steering of traffic was implemented on all nodes by configuring dynamic routing and adding static ARP entries pointing to the special MAC address of each node.

If there are a large number of nodes in a cluster deployment, adding and managing static ARP entries with special MAC addresses on all the nodes is a cumbersome task. Now, nodes implicitly use special MAC addresses for steering packets, so static ARP entries pointing to special MAC addresses no longer have to be added to the cluster nodes. NetScaler VPX on the AWS cloud now supports IAM roles. IAM roles are designed to let AWS applications securely make API requests from their instances, without requiring users to manage the security credentials that the applications use. The user can define which accounts or AWS services can assume the roles.

The application is granted the permissions for the actions and resources that the user has defined for the role, through the security credentials associated with the role. An application on the instance retrieves the security credentials provided by the role from the instance metadata item iam/security-credentials/role-name. These security credentials are temporary and are renewed automatically. New credentials are available at least five minutes before the expiration of the old credentials.
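For example, an application on the instance might fetch the temporary credentials from the instance metadata service like this (here 'role-name' stands for the actual role name, as in the metadata path above):

    curl http://169.254.169.254/latest/meta-data/iam/security-credentials/role-name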

To avoid unnecessary congestion when each client requests the revocation status of a server certificate during an SSL handshake, the NetScaler appliance now supports OCSP stapling. That is, the appliance can now send the revocation status of a server certificate to a client at the time of the SSL handshake, after validating the certificate status with an OCSP responder. The revocation status of the server certificate is 'stapled' to the response the appliance sends to the client as part of the SSL handshake. To use the OCSP stapling feature, you must enable it on an SSL virtual server and add an OCSP responder on the appliance.
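A minimal sketch of that configuration ('ssl_vs', 'ocsp_resp', and the URL are hypothetical):

    add ssl ocspResponder ocsp_resp -url "http://ocsp.example.com/"
    set ssl vserver ssl_vs -ocspStapling ENABLED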

NetScaler appliances now support the SessionTicket TLS extension. Use of this extension indicates that the session details are stored on the client instead of on the server. The client must indicate that it supports this mechanism by including the session ticket TLS extension in the ClientHello message. For new clients, this extension is empty. The server sends a new session ticket in the NewSessionTicket handshake message. The session ticket is encrypted with a key known only to the server. If the server cannot issue a new ticket at that time, it completes a regular handshake.
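Enabling the extension on an SSL virtual server might look like this (a sketch; 'ssl_vs' is a hypothetical name, and the parameter name should be confirmed against the documentation):

    set ssl vserver ssl_vs -sessionTicket ENABLED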

The new MPX/SDX 14000 FIPS platform contains one primary card and one or more secondary cards. If you enable the hybrid FIPS mode, the pre-master secret decryption commands are run on the primary card, because the private key is stored on this card, but bulk encryption and decryption are offloaded to a secondary card. This significantly increases the bulk encryption throughput on an MPX/SDX 14000 FIPS platform as compared to the non-hybrid FIPS mode and the existing MPX 9700/15000 FIPS platforms. Enabling the hybrid FIPS mode also increases the SSL transactions per second on this platform. TCP Fast Open (TFO) is a TCP mechanism that enables speedy and safe data exchange between a client and a server during TCP's initial handshake. This feature is available as a TCP option in the TCP profile bound to a virtual server of a NetScaler appliance. TFO uses a TCP Fast Open cookie (a security cookie) that the NetScaler appliance generates to validate and authenticate a client initiating a TFO connection to the virtual server.

By using the TFO mechanism, you can reduce an application's network latency and the delay experienced in short TCP transfers.
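For example, TFO could be enabled through a TCP profile along these lines ('tfo_prof' and 'web_vs' are hypothetical names):

    add ns tcpProfile tfo_prof -tcpFastOpen ENABLED
    set lb vserver web_vs -tcpProfileName tfo_prof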

Because of the imminent exhaustion of IPv4 addresses, ISPs have started transitioning to IPv6 infrastructure.

But during the transition, ISPs must continue to support IPv4 along with IPv6, because most of the public Internet still uses IPv4. Large scale NAT64 is an IPv6 transition solution for ISPs with IPv6 infrastructure to connect their IPv6-only subscribers to the IPv4 Internet. DNS64 is a solution for enabling discovery of IPv4-only domains by IPv6-only clients. DNS64 is used with large scale NAT64 to enable seamless communication between IPv6-only clients and IPv4-only servers.
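As an illustration, a DNS64 configuration on the appliance might use the well-known NAT64 prefix 64:ff9b::/96 like this (a sketch; the action, policy, and virtual server names are hypothetical, and the rule expression is an assumption to confirm against the documentation):

    add dns action64 act_dns64 -prefix 64:ff9b::/96
    add dns policy64 pol_dns64 -rule "CLIENT.IPv6.SRC.IN_SUBNET(2001:db8::/64)" -action act_dns64
    bind lb vserver dns64_vs -policyName pol_dns64 -priority 10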

The NetScaler appliance can now log the request header information of an HTTP connection that is using the NetScaler's DS-Lite functionality. The HTTP header logs can be used by ISPs to observe trends related to the HTTP protocol among a set of subscribers.

For example, an ISP can use this feature to find out the most popular website among a set of subscribers. For more information, see http://docs.citrix.com/en-us/netscaler/11-1/netscaler-support-for-telecom-service-providers/dual-stack-lite/logging-monitoring-DS-Lite.html. Another simple method is to use wildcard ports in a static mapping entry. You need to create only one static mapping entry, with the NAT-port and subscriber-port parameters set to the wildcard character (*) and the protocol parameter set to ALL, to expose all the ports of a subscriber to the Internet. For a subscriber's inbound or outbound connections matching a wildcard static mapping entry, the subscriber's port does not change after the NAT operation. For more information, see http://docs.citrix.com/en-us/netscaler/11-1/netscaler-support-for-telecom-service-providers/lsn-introduction/configuring-static-lsn-maps.html.
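A sketch of such a wildcard entry ('wild_map' and the subscriber IP are hypothetical, and the exact argument order should be confirmed against the linked documentation):

    add lsn static wild_map ALL 192.0.2.10 *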

When a subscriber-initiated connection to the Internet matches a wildcard static mapping entry, the NetScaler appliance assigns a NAT port that has the same number as the subscriber port from which the connection is initiated. Similarly, an Internet host gets connected to a subscriber's port by connecting to the NAT port that has the same number as the subscriber's port. The left angle bracket (<) is now converted to its HTML character entity equivalent (&lt;). This prevents browsers from interpreting unsafe HTML tags. A NetScaler appliance configured as a DNS end resolver sometimes fails to respond to DNS queries.

When the appliance is configured as an end resolver, it generates iterative DNS queries to name servers on behalf of the client and returns the final responses. If a DNS zone has multiple NS records, the appliance queries the first name server in the NS records. If this resolution fails, the appliance does not retry with the other name servers in the NS records, and it does not send any response to the client.

In a high availability (HA) setup, after a forced HA synchronization, the configuration is first cleared and then reapplied on the secondary node. As part of the synchronization operation, the service state changes are logged in the ns.log file. Repeated forced synchronizations can flood the ns.log file.

However, the service state messages are applicable only to the primary node and are not relevant to the secondary node. Therefore, these messages are no longer logged in the ns.log file on the secondary node.
