Date | Description |
March 3, 2018 | Release 3.1(2m) became available. |
March 12, 2018 | In the New Software Features section, added the following item: Cisco ACI Multi-Site support on the Cisco N9K-C9364C switch and N9K-C9508-FM-E2 and N9K-C9516-FM-E2 fabric modules |
March 15, 2018 | In the New Software Features section, added the following item: Cloud Foundry Integration with Cisco ACI |
March 22, 2018 | In the Changes in Behavior section, added the following item: If no statistics have been generated on a path in the lifetime of the fabric, no atomic counters are generated for the path. Also, the Traffic Map in the Visualization tab (Operations > Visualization in the Cisco APIC GUI) does not show all paths, only the active paths (paths that had traffic at some point in the fabric lifetime). |
March 30, 2018 | In the Miscellaneous Compatibility Information section, changed the Cisco AVS release to 5.2(1)SV3(3.21) and added Cisco ACI Virtual Edge 1.1(2a). |
April 29, 2018 | 3.1(2o): Release 3.1(2o) became available. Added the open and resolved bugs for this release. |
May 18, 2018 | In the New Software Features section, added the following item: Graceful Maintenance on switch maintenance groups |
May 22, 2018 | 3.1(2p): Release 3.1(2p) became available. Added the resolved bugs for this release. |
June 4, 2018 | In the Changes in Behavior section, added the following item: The Cisco APIC-generated SNMP traps now include variable binding (varbind) timeticks. |
June 7, 2018 | 3.1(2q): Release 3.1(2q) became available. Added the resolved bugs for this release. |
June 13, 2018 | 3.1(2m): In the Known Behaviors section, added the bulleted list item that begins with: A fault is raised for a VMware VDS, Cisco ACI Virtual Edge, or Cisco AVS VMM domain indicating that complete tagging information could not be retrieved for a controller. |
June 26, 2018 | 3.1(2m): In the Open Bugs section, added bug CSCvi29916. 3.1(2o): In the Resolved Bugs section, added bug CSCvi29916. |
November 14, 2018 | 3.1(2s): Release 3.1(2s) became available. Added the resolved bugs for this release. |
November 21, 2018 | 3.1(2m): In the Open Bugs section, added bug CSCvn15374. |
January 7, 2019 | 3.1(2t): Release 3.1(2t) became available; there are no changes to this document for this release. |
January 16, 2019 | In the New Software Features section, removed 'Maximum MTU increased to 9216.' This information was erroneously included; the maximum MTU was not increased in this release. |
3.1(2u): Release 3.1(2u) became available; there are no changes to this document for this release. |
Feature | Description | Guidelines and Restrictions
BGP external routed network with the autonomous system override | The autonomous system override function replaces the autonomous system number from the originating router with the autonomous system number of the sending BGP router in the autonomous system path of the outbound routes. For more information, see the Cisco APIC Layer 3 Networking Configuration Guide. A worked illustration of the AS-path rewrite follows this table. | None.
Cisco ACI Multi-Site support on the Cisco N9K-C9364C switch and N9K-C9508-FM-E2 and N9K-C9516-FM-E2 fabric modules | Cisco ACI Multi-Site is now supported on the Cisco N9K-C9364C switch and N9K-C9508-FM-E2 and N9K-C9516-FM-E2 fabric modules. | None. |
Cloud Foundry Integration with Cisco ACI | Beginning in this release, Cloud Foundry is integrated with Cisco Application Centric Infrastructure (ACI). This feature enables customers to use all Cisco ACI security and policy features with Cloud Foundry containers. Cloud Foundry is a platform as a service (PaaS) that uses Linux containers to deploy and manage applications. For more information, see the Cisco ACI and Cloud Foundry Integration knowledge base article and the Cisco Application Policy Infrastructure Controller OpenStack and Container Plugins, Release Notes. | Cisco ACI integration applies to Cloud Foundry deployed on VMware vSphere where the Cisco ACI provides the network fabric for VMware vSphere. |
Graceful Maintenance on switch maintenance groups | In this release, when a user upgrades the Cisco ACI fabric, there is now an option to enable Graceful Maintenance on the maintenance groups being upgraded. When this option is enabled, the Cisco APIC puts the switches into the existing graceful insertion and removal (GIR) mode before reloading, which allows each switch to shut down all protocols gracefully before it reloads for the upgrade. A configuration sketch follows this table. | This feature can only be used when all nodes in the fabric are upgraded to release 3.1(2) or later. Using this feature to upgrade nodes running a release prior to 3.1(2) can result in unexpected traffic loss while the nodes are being upgraded.
LACP support on Layer 2/Layer 3 traffic diversion for the graceful insertion and removal mode | The existing graceful insertion and removal (GIR) mode supports all Layer 3 traffic diversion. With LACP, all of the Layer 2 traffic is also diverted to the redundant node. After a node goes into maintenance mode, LACP running on the node immediately informs neighbors that it can no longer be aggregated as part of a port channel. All traffic is then diverted to the vPC peer node. For more information, see the Cisco APIC Getting Started Guide. | None. |
Neighbor discovery router advertisement on Layer 3 Outsides | Router solicitation/router advertisement packets are used for auto-configuration and are configurable on Layer 3 interfaces, including routed interface, Layer 3 sub-interface, and SVI (external and pervasive). For more information, see the Cisco APIC Layer 3 Networking Configuration Guide. | None. |
QoS for Layer 3 Outsides | In this release, QoS policy enforcement on L3Out ingress traffic is enhanced. To configure QoS policies in an L3Out, the VRF instance must be set to egress mode (Policy Control Enforcement Direction = 'egress') with policy control enabled (Policy Control Enforcement Preference = 'Enforced'). You must configure the QoS class priority or DSCP setting in the contract that governs the Layer 3 external network. For more information, see the Cisco APIC Layer 3 Networking Configuration Guide. A configuration sketch follows this table. | None.
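To make the BGP autonomous system override behavior above concrete, here is a minimal Python sketch of the AS-path rewrite that the sending BGP router performs. The function name and the sample ASNs are illustrative only; this is not part of any Cisco API.

```python
def as_override(as_path, neighbor_asn, local_asn):
    """Replace every occurrence of the neighbor's ASN in the outbound
    AS path with the ASN of the sending BGP router."""
    return [local_asn if asn == neighbor_asn else asn for asn in as_path]

# A route originated in AS 65001 is re-advertised toward a peer that is
# also in AS 65001. Without the override, the peer would reject the route
# because its own ASN appears in the path; with it, 65001 is rewritten to
# the sender's ASN (65000).
print(as_override([65001, 64512], neighbor_asn=65001, local_asn=65000))
# -> [65000, 64512]
```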
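As a sketch of how the Graceful Maintenance option might be enabled programmatically, the following Python snippet posts a maintenance policy with graceful upgrade turned on through the APIC REST API. The controller address, credentials, policy name, and target version string are placeholders; the maintMaintP attribute names follow the published object model, but verify them against your APIC release before use.

```python
import requests

APIC = "https://apic.example.com"  # placeholder controller address
s = requests.Session()
s.verify = False  # lab only; validate certificates in production

# Log in; aaaLogin is the standard APIC authentication endpoint.
s.post(f"{APIC}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

# Enable Graceful Maintenance on a maintenance policy. With graceful="yes",
# switches are placed into GIR mode before they reload for the upgrade.
s.post(f"{APIC}/api/mo/uni/fabric/maintpol-MyMaintPol.json",
       json={"maintMaintP": {"attributes": {
           "name": "MyMaintPol",
           "graceful": "yes",
           "version": "n9000-13.1(2m)",  # hypothetical target version string
       }}})
```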
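Similarly, for the QoS for Layer 3 Outsides item, here is a hedged sketch of the two settings described above, posted through the same REST pattern: the VRF moved to egress enforcement, and the QoS class and DSCP target set on the contract that governs the Layer 3 external network. The tenant, VRF, and contract names are placeholders; the fvCtx and vzBrCP attribute names come from the object model, but confirm the accepted values for your release.

```python
import requests

APIC = "https://apic.example.com"  # placeholder controller address
s = requests.Session()
s.verify = False  # lab only; validate certificates in production
s.post(f"{APIC}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

# 1. Put the VRF into egress policy enforcement with enforcement enabled
#    (Policy Control Enforcement Direction and Preference in the GUI).
s.post(f"{APIC}/api/mo/uni/tn-T1/ctx-VRF1.json",
       json={"fvCtx": {"attributes": {"pcEnfDir": "egress",
                                      "pcEnfPref": "enforced"}}})

# 2. Set the QoS class priority and DSCP target on the contract that
#    governs the Layer 3 external network.
s.post(f"{APIC}/api/mo/uni/tn-T1/brc-L3OutContract.json",
       json={"vzBrCP": {"attributes": {"prio": "level1",
                                       "targetDscp": "EF"}}})
```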
Bug ID | Description | Exists in |
When using MongoDB for Cisco ACI apps, the app must be deleted before the upgrade and reinstalled after the upgrade to avoid corruption of application data. | 3.1(2m) and later |
The zoning rule installation incorrectly permits all of the traffic between the L3Out and the EPG as the default permit, without redirecting the traffic to the service node. | 3.1(2m) and later |
If a service graph is initially configured as uni-directional and a filter is later added to make the service graph bi-directional, the service graph enters a faulty state. | 3.1(2m) and later |
When two different service chains (one for IPv4 traffic and the other for IPv6 traffic) are using the same exact forwarding path (the same bridge domain and VLAN on the service nodes), but different redirect policies (one for IPv4 and the other for IPv6), then the IP SLA objects might not be cleaned up when the contract is detached from the service graph. | 3.1(2m) and later | |
A newly created VFC shows the admin state as down. | 3.1(2m) and later |
In Cloud Orchestrator Mode, dual stack (that is, IPv4 and IPv6) cannot be configured on the same interface. | 3.1(2m) and later | |
Whenever a user expands a drop-down list that has many policies (>10K), the GUI tries to show all of them at once, which causes a timeout from the server, after which the server tries again. This process continues and the GUI becomes unresponsive. | 3.1(2m) and later | |
The Cisco AVS DPA process might crash after upgrading to a 3.1.x release. This does not always occur. When it does occur, the DPA process restarts automatically. | 3.1(2m) and later |
There is an exhaustion of LPM entries due to the programming of an unnecessary EPG subnet gateway IP address. | 3.1(2m) and later | |
There is an issue with upgrading in the following situation: ■ A node is decommissioned a few hours before initiating an upgrade. ■ An upgrade is triggered with the 'doNotPause' flag turned on in the maintenance group. ■ The node is in the maintenance group that is being upgraded. In this situation, the upgrade stalls while waiting for that node to complete the upgrade. | 3.1(2m) and later | |
On decommissioning and recommissioning the Cisco APICs or nodes, extra prefix white list entries might be created on the TOR switches. | 3.1(2m) and later |
VMs on ESXi hosts running Cisco AVS software are unable to join the network. When running the 'vemcmd show port' command, you can see that the VM is stuck in a 'BLK' state with reason of 'WAIT EPP.' | 3.1(2m) | |
A fault is raised that specifies a problem that occurred while retrieving tagging information for a VMM controller. An inventory pull from VMware vCenter takes a long time (>10 minutes) and continuously completes with a partial inventory result. The processing of events from VMware vCenter is delayed, which can delay the downloading of policies to the leaf switches when EPGs are deployed on demand in the VMM domain. This affects connectivity for newly deployed VMs or VMs that have been vMotioned. | 3.1(2m) |
When upgrading Cisco APICs, constant heartbeat loss is seen, which causes the Cisco APICs to lose connectivity with one another. In the Cisco APIC appliance_director logs, the following message is seen several hundred times during the upgrade: appliance_director||DBG4||...||Lost heartbeat from appliance id= ... appliance_director||DBG4||...||Appliance has become unavailable id= ... On the switches, each process (such as policy-element) sees rapidly changing leader elections and minority states: adrs_rv||DBG4||||Updated leader election on replica=(6,26,1) | 3.1(2m) and later |
When values under a custom QoS policy change, the changes are not reflected under the service graph that is attached to the profile. | 3.1(2o) and later |
Bug ID | Description | Fixed in |
This issue occurs while using the import-config file command on the CLI, where the file is the output of the export-config command. The file contains the output of show running-config for the scope where export-config was executed. The error occurs only when the file contains crypto commands; the Bash shell throws the error and hangs without returning the prompt. | 3.1(2m) |
When the IS-IS to OSPF Multi-Site CPTEP route leaking is not programmed, the inter-site BGP session might go down on the spine switch. This behavior might cause traffic drops. | 3.1(2m) |
Incorrect information displays in the topology for the OpenStack compute node. | 3.1(2m) |
A fault for a 100% drop rate is observed for an endpoint-to-external IP address atomic counters policy after disabling/enabling a switch. | 3.1(2m) | |
An external IP address-to-external IP address atomic counters policy does not give the results for the flow that matches the configured external IP addresses. | 3.1(2m) | |
Clicking the Submit button in a wizard will fail if any of the entered values are invalid. | 3.1(2m) | |
When installing the Hyper-V agent, MSI gives the following error: 'This application is only supported on English language version of Windows server 2012, or higher operating system.' | 3.1(2m) | |
The FEX access policy is not in Configured Access Policies under the EPG. | 3.1(2m) | |
When the fabric ID is changed in the sam.config file and the Cisco APIC is rebooted, if DHCP discovery reaches the Cisco APIC before it reaches the node identity policy, a fault for the wrong fabric gets generated. | 3.1(2m) | |
The 'fabric <node> <command>' command cannot be run from the Cisco APIC to any switch. | 3.1(2m) |
A service graph is in the applied state even if the VRF instance associated with the consumer EPG or consumer-facing service node EPG gets deleted. | 3.1(2m) |
In a Multipod setup, if a node with a looseNode is moved from one pod to another, the fabricLooseNode policy associated with the previous pod does not get deleted and causes a 'Specified node not present in the specified pod - fault-F2547' fault. The fault is harmless and does not indicate any functional failure. | 3.1(2m) |
When looking at the Encap Already In Use fault (F0467), debugMessage is blank. | 3.1(2m) | |
The warning messages on the creation wizards for 'Fabric - Access Policies - Switch Policies - Policies - Forwarding Scale Profile', 'Fabric - Access Policies - Switch Policies - Policy Groups - Leaf Policy groups - modify Forward Scale Profile Policy', and 'Fabric - Access Policies - Switch Policies - Profiles - Leaf Profiles' are outdated. | 3.1(2m) | |
With a 0.0.0.0/0 subnet and a specific subnet with an import route-map, the GUI shows only a 212.1.0.0/24 subnet. | 3.1(2m) | |
An inconsistent configuration involving the l3extRsPathL3OutAtt managed object with ifInstT='ext-svi' might be accepted. | 3.1(2m) | |
The svc_ifc_plgnhandler in /data/volume is not auto-rotated, which causes /data/log to become almost full and raises an alert on the Cisco APIC. | 3.1(2m) |
A remote leaf switch TEP pool is not getting deleted if the remote leaf switches are decommissioned before deleting the vPC. | 3.1(2m) | |
Connectivity from all VMs needing to go through the fabric is lost. The Hyper-V agent logs might show information indicating that it is still trying to connect to the old TEP IP address (pre-replacement) as opposed to the new one (post-replacement). | 3.1(2m) | |
On importing an exported configuration, the import will succeed and might report a warning about a failure to extract the CISCO.CloudMode.1.0.zip archive. If the CISCO.CloudMode.1.0 device package was deleted, it will not be restored on import. | 3.1(2m) | |
The IpCktEp policy might not be propagated to the leaf switch even after the policy has been configured properly on the Cisco APIC. | 3.1(2m) | |
There will be a fault in the Cisco APIC for a remote leaf switch after a Cisco APIC upgrade or after decommissioning and recommissioning the Cisco APIC. | 3.1(2m) |
The acked fault count is incorrect under the following circumstances: ■ A fault that was already acknowledged gets cleared. ■ Only the fault instance or the associated fault delegate is acknowledged. | 3.1(2m) |
There are duplicate CoPP rules in the TCAM. | 3.1(2m) | |
If the user posts policies to download a specific image version and to upgrade to that version in quick succession, without waiting for the image to finish downloading, the currently running catalog gets picked up for the compatibility check. If the currently running catalog does not support the newly requested image version, the upgrade fails with the message 'version not compatible,' even though the user would expect the versions to be compatible. | 3.1(2m) |
Upon restoration of a configuration, if there are many (>100) interface policy groups associated to a single AEP, the full restoration takes hours. Similar problems can be seen with other mass associations on the fabric. | 3.1(2m) | |
There are inventory sync failure faults and various VMM faults for the affected VMM domain. | 3.1(2m) | |
The Cisco APIC will not allow the deletion of a remote leaf switch or pod TEP pool if any dhcpClient managed objects are present with an IP address assigned from that TEP pool. | 3.1(2m) |
The DNS policy is designed to be applied per context/VRF instance, but on the Cisco APIC, only the default DNS profile can be used. | 3.1(2m) |
When attempting to log into the Cisco APIC GUI, you might receive the error 'AAA Server Authentication DENIED.' You might also see the following message in a network trace when the LDAP server responds to the Cisco APIC's search query: 'In order to perform this operation a successful bind must be completed.' | 3.1(2m) | |
Shell commands cannot be executed. | 3.1(2m) | |
If many unreachable stats export destinations are configured, the observer element on a switch might dump a core and restart. | 3.1(2m) | |
The STP policy does not change or has mixed behavior after switching to another policy or reverting to the default policy. | 3.1(2m) | |
An iACL entry with a subnet mask of 0 or 32 is not allowed in the CoPP Pre-Filter creation wizard in the GUI. | 3.1(2m) | |
Under VM Networking > Microsoft > [SCVMM Domain] > Controllers > [SCVMM Controller] > DVS - apicVswitch_[name] > Portgroups > apicInfra_[name], the IP addresses display as 0.0.0.0 under the Management Network Adapters, even though the Hyper-V host has an IP address assigned through DHCP in the infra network for the VTEP interface. | 3.1(2m) | |
Fault F1313 is triggered that affects a VMNIC on an AVS-integrated hypervisor. The fault states the following error in the fault description: [API call for adding VNic failed.] | 3.1(2m) | |
When importing a configuration that includes an EPG whose bridge domain is associated with VRF 'copy' (fvRsCtx=copy), the import completes, but tn-common will be corrupted. | 3.1(2m) | |
On configuring the same encapsulation (External SVI) with a different IP address subnet on different border leaf switches, the configuration gets rejected. | 3.1(2o) |
The Cisco APIC does not send varbind timeticks in traps. | 3.1(2o) | |
QoS values are not preserved when a service graph is rendered. | 3.1(2o) | |
VMs on ESXi hosts running Cisco AVS software are unable to join the network. When running the 'vemcmd show port' command, you can see that the VM is stuck in a 'BLK' state with reason of 'WAIT EPP.' | 3.1(2o) | |
QoS values are not preserved when a service graph is rendered. | 3.1(2o) | |
Duplicate Address Detection (DAD) disables the secondary IPv6 address if the user configures a shared IPv6 address on a Layer 3 SVI. | 3.1(2o) | |
The policy element creates a stale entry for the out-of-band management next-hop. | 3.1(2o) | |
The port property is required for the snmpTrapFwdServerP class. | 3.1(2o) | |
After deleting a bridge domain and creating a new bridge domain that uses the same subnet, the Cisco APIC generates a fault on the bridge domain stating that there is already another bridge domain with the same subnet within the same VRF instance. The error is similar to the following example: Fault delegate: BD Configuration failed for [BD_NEW_dn] due to duplicate-subnets-within-ctx: [BD_OLD_dn] | 3.1(2p) | |
A crash occurs during the retrieval of vSphere tag information from VMware vCenter. | 3.1(2p) | |
There are duplicate PVLAN entries in VMware vCenter. Depending on the version of Cisco APIC code, the Cisco APIC's vmmmgr process will also crash and create a core file. | 3.1(2p) | |
VM ports do not come up after being connected or after other host/VM events, such as the host or VM getting disconnected and then reconnected. | 3.1(2p) |
Ports are stuck in wait EPP/ACK. | 3.1(2p) | |
A fault is raised that specifies a problem that occurred while retrieving tagging information for a VMM controller. An inventory pull from VMware vCenter takes a long time (>10 minutes) and continuously completes with a partial inventory result. The processing of events from VMware vCenter is delayed, which can delay the downloading of policies to the leaf switches when EPGs are deployed on demand in the VMM domain. This affects connectivity for newly deployed VMs or VMs that have been vMotioned. | 3.1(2q) |
In the 3.1 release, whenever the Cisco APIC detects an inconsistency with the port group attributes on VMware vCenter and Cisco APIC, the Cisco APIC re-pushes the port group attributes to resolve the inconsistency. This breaks vArmor integration assumptions, as prior to the 3.1 release the Cisco APIC only raised a fault whenever there was an inconsistency, but the port group attributes did not get re-pushed. | 3.1(2q) | |
An OpflexP core is seen on the leaf switch or spine switch. The leaf switch or spine switch will recover from this, and there should be no impact other than this core being generated and the service being restarted. | 3.1(2s) |
There is an opflexp core during a stats update. The opflexp process should recover and there should be no service impact. | 3.1(2s) |
Bug ID | Description | Exists in |
The Cisco APIC does not validate duplicate IP addresses that are assigned to two device clusters. The communication to devices or the configuration of service devices might be affected. | 3.1(2m) and later | |
In some of the 5-minute statistics data, the count of ten-second samples is 29 instead of 30. | 3.1(2m) and later | |
The node ID policy can be replicated from an old appliance that is decommissioned when it joins a cluster. | 3.1(2m) and later | |
The DSCP value specified on an external endpoint group does not take effect on the filter rules on the leaf switch. | 3.1(2m) and later | |
The hostname resolution of the syslog server fails on leaf and spine switches over in-band connectivity. | 3.1(2m) and later | |
Following a FEX or switch reload, configured interface tags are no longer configured correctly. | 3.1(2m) and later | |
Switches can be downgraded to a 1.0(1) version if the imported configuration consists of a firmware policy with a desired version set to 1.0(1). | 3.1(2m) and later | |
If the Cisco APIC is rebooted using the CIMC power reboot, the system enters fsck due to a corrupted disk. | 3.1(2m) and later |
The Cisco APIC Service (ApicVMMService) shows as stopped in the Microsoft Service Manager (services.msc, in Control Panel > Administrative Tools > Services). This happens when a domain account does not have the correct privilege in the domain to restart the service automatically. | 3.1(2m) and later |
The traffic destined to a shared service provider endpoint group picks an incorrect class ID (PcTag) and gets dropped. | 3.1(2m) and later | |
Traffic from an external Layer 3 network is allowed when configured as part of a vzAny (a collection of endpoint groups within a context) consumer. | 3.1(2m) and later | |
CSCuu61998 | Newly added microsegment EPG configurations must be removed before downgrading to a software release that does not support them. | 3.1(2m) and later
Downgrading the fabric starting with the leaf switch will cause faults such as policy-deployment-failed with fault code F1371. | 3.1(2m) and later | |
If the 'Remove related objects of Graph Template' wizard is used in the Cisco APIC GUI, the Cisco APIC does not clean up objects that are in other tenants. | 3.1(2m) and later |
The OpenStack metadata feature cannot be used with Cisco ACI integration with the Juno release (or earlier) of OpenStack due to limitations with both OpenStack and Cisco’s ML2 driver. | 3.1(2m) and later | |
Creating or deleting a fabricSetupP policy results in an inconsistent state. | 3.1(2m) and later | |
After a pod is created and nodes are added in the pod, deleting the pod results in stale entries from the pod that are active in the fabric. This occurs because the Cisco APIC uses open source DHCP, which creates some resources that the Cisco APIC cannot delete when a pod is deleted. | 3.1(2m) and later | |
When a Cisco APIC cluster is upgrading, the Cisco APIC cluster might enter the minority status if there are any connectivity issues. In this case, user logins can fail until the majority of the Cisco APICs finish the upgrade and the cluster comes out of minority. | 3.1(2m) and later | |
When downgrading to a 2.0(1) release, the spine switches and their interfaces must be moved from infra L3out2 to infra L3out1. After infra L3out1 comes up, delete L3out2 and its related configuration, and then downgrade to the 2.0(1) release. | 3.1(2m) and later |
No fault gets raised when the same encapsulation VLAN is used in a copy device in tenant common, even though a fault should be raised. | 3.1(2m) and later |
In the leaf mode, the command 'template route group <group-name> tenant <tenant-name>' fails, declaring that the tenant passed is invalid. | 3.1(2m) and later | |
When first-hop security is enabled on a bridge domain, traffic is disrupted. | 3.1(2m) and later |
Cisco ACI Multi-Site Orchestrator BGP peers are down and a fault is raised for a conflicting rtrId on the fvRtdEpP managed object during l3extOut configuration. | 3.1(2m) and later |
The PSU SPROM details might not be shown in the CLI after the PSU is removed from and reinserted into the switch. | 3.1(2m) and later |