EAP670 PD Over Current indication
I have discovered at least one reason for my EAP670 v1 reboots: they are being reset by the switch for drawing too much power. I had seen this before and have tried multiple adjustments to the PoE settings. It should be noted that I have many, many of the PoE switches in question, and they work fine with every other PoE device, including all of the other PoE devices on this particular switch. I have allocated an eye-popping 25 W of power for these 670s, and they still continue to generate the following error and get reset by the switch:
W 05/08/24 21:04:29 00562 ports: port 14 PD Over Current indication
This feels like a fairly serious bug, but all of my data is anecdotal: switches of the same type have no PoE issues, all other PoE devices on the same switch work fine, and power availability is within spec.
I am still on 1.0.13 and have seen this issue on older versions as well. Upgrading to 1.0.14 is a non-starter because all EAP670s consistently disconnect from the controller on that release (I already have a ticket open for that; it is very reproducible).
Switch configuration:
interface 14
name "Core: wap-ax-1"
power-over-ethernet critical
poe-allocate-by value
poe-value 25
poe-lldp-detect enabled
tagged vlan 4,6-7,9
untagged vlan 809
exit
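For completeness, the one allocation mode I am not using here is class-based. A minimal sketch of what that would look like, assuming the same ArubaOS-Switch syntax as above (the switch would then reserve by the PD's negotiated class, 30 W for a Class 4 device, rather than my fixed 25 W value):
interface 14
poe-allocate-by class
exit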
core# show power-over-ethernet brief
Status and Configuration Information
Available: 370 W Used: 101 W Remaining: 269 W
PoE Pwr Pwr Pre-std Alloc Alloc PSE Pwr PD Pwr PoE Port PLC PLC
Port Enab Priority Detect Cfg Actual Rsrvd Draw Status Cls Type
------ ---- -------- ------- ----- ------ ------- ------- ------------ --- ----
...
14 Yes critical on value value 29.3 W 8.5 W Delivering 4 2
Has anyone else seen this? I've got an older Juniper PoE switch I can test with as well, but I don't think the switch is really the issue and I'd rather not power that power-hungry beast back on. I also have a spare Aruba switch I can try, but again, I don't see the switch as the issue.
@ndb217 Again this morning I have seen the same over-current error:
I 05/21/24 05:30:55 00561 ports: port 14 Applying Power to PD.
I 05/21/24 05:30:55 00560 ports: port 14 PD Detected.
W 05/21/24 05:30:53 00562 ports: port 14 PD Over Current indication.
The timestamps are correct; this was at 0530, when no one was using the system other than always-on devices, of which there aren't many. Note the log is shown newest first: the over-current at :53 was followed by re-detection and re-powering at :55. Only one of the APs experienced this. No configurations have been changed, and no other PoE devices had the issue.
Hi @ndb217
Could you please try swapping a different EAP, ideally another EAP670, into this location and see whether you encounter the same issue? If only this particular EAP670 has the issue, we would suggest getting a replacement. Please let us know when you have run the test and need the replacement.
@Hank21 It is definitely not just this one AP. My other one does it as well, just not as frequently.
Hi @ndb217
Could you confirm the model number of your PoE switch? We have confirmed that the EAP670 supports standard PoE and should be compatible with standards-compliant Power Sourcing Equipment.
Are you using solid copper cables or CCA?
Are the cable ends newly terminated or have they been 'around for a while'?
If you power the AP with a short patch cord directly from the switch, does the overcurrent still happen?
@d0ugmac1 The structured cabling is solid core into keystone jacks, all tested end to end. The patch cables are commercially pre-terminated. I have also tested with a Fluke tester from patch cable to patch cable.
An additional data point: all other 802.3af/at PoE devices use the exact same infrastructure without issue, including a fairly high-draw cellular antenna/router that has operated for a couple of years as an out-of-band connection.
I have a 48v TP-Link PoE injector that I can test with as well.
Hi @ndb217
In our testing, the power consumption of the 670 v1 is generally only around 17 W, so the 25 W supplied by the switch should be sufficient. We have tested with other switches and it worked fine, though we don't have the Aruba switch you mentioned. We even tried reducing the power to 7 W, which put the EAP into standby mode, and it did not cause any current overload or restart.
The issue might be a compatibility problem between the Aruba switch and the EAP, or the switch itself might be faulty.
You may try the following suggestions:
1. You can compare with other PoE devices and let us know the models of the other PoE devices, especially any TP-Link EAPs, that work properly with the switch. This will help our development team analyze further.
2. You can try increasing the power supplied to the 670 v1 and see whether that helps (see the sketch after this list).
3. You can also try connecting the 670 v1 to other ports that are known to work well with other PoE devices. This will help determine whether the issue is specific to the port or to the device itself.
4. If none of the above works, the only remaining option would be to power the 670 v1 with an injector.
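For suggestion 2, a minimal switch-side sketch, assuming the same ArubaOS-Switch syntax already shown in this thread (30 W is the 802.3at Type 2 PSE maximum, so it is the most a single PoE+ port can usefully reserve):
interface 14
poe-value 30
exit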
Thanks for your patience.
The power to the devices is set to both "critical" priority and a 25 W reservation, and has been for a few months, since I first noticed the behavior. There are 12 other PoE devices that do *not* exhibit this behavior:
core# show power-over-ethernet brief | i Delivering
Status and Configuration Information
Available: 370 W Used: 103 W Remaining: 267 W
PoE Pwr Pwr Pre-std Alloc Alloc PSE Pwr PD Pwr PoE Port PLC PLC
Port Enab Priority Detect Cfg Actual Rsrvd Draw Status Cls Type
------ ---- -------- ------- ----- ------ ------- ------- ------------ --- ----
5 Yes low on usage usage 4.6 W 4.4 W Delivering 4 2
6 Yes critical on value value 29.3 W 8.0 W Delivering 4 2
7 Yes low on usage usage 2.7 W 2.7 W Delivering 4 2
13 Yes low on usage usage 5.4 W 5.1 W Delivering 4 2
14 Yes critical on value value 29.3 W 9.6 W Delivering 4 2
16 Yes low on usage usage 4.3 W 4.2 W Delivering 4 2
17 Yes low on usage usage 3.2 W 3.1 W Delivering 4 2
21 Yes low on usage usage 2.6 W 2.6 W Delivering 4 2
23 Yes low on usage usage 4.8 W 4.6 W Delivering 0 1
24 Yes low on usage usage 4.6 W 4.5 W Delivering 4 2
25 Yes low on usage usage 3.0 W 2.9 W Delivering 4 2
28 Yes low on usage usage 2.3 W 2.3 W Delivering 4 2
33 Yes low on usage usage 3.4 W 3.3 W Delivering 0 1
39 Yes low on usage usage 3.3 W 3.2 W Delivering 4 2
Fairly light PoE load:
core# show power-over-ethernet
Status and Counters - System Power Status
Chassis power-over-ethernet:
Total Available Power : 370 W
Total Power Drawn : 59 W +/- 6W
Total Power Reserved : 102 W
Total Remaining Power : 268 W
Internal Power
Main Power
PS (Watts) Status
----- ------------- ---------------------
1 370 POE+ Connected
Additionally, poe-lldp-detect enabled is set on the ports and LLDP is enabled on the APs. I really, really do not think this is a problem with the switch PoE: I have many of these switches in production working with other AP vendors and a wide variety of PoE devices, and the 670s are the only devices that have any issue whatsoever. I have also moved the APs to known-working ports on the switch, which did not resolve the power issues, upgraded the switch firmware, and re-tested the cabling (structured and patch).
Given the other behaviors these things exhibit, which fluctuate from firmware load to firmware load, I am not confident that they are not the problem. This is the only installation of these APs I have; the other locations use different APs with everything else the same apart from the configured power values.
Is the PoE+ implementation on these 670s standards compliant? I used to run a network with 150k edge ports of HP/Aruba and never had an issue with PoE, so I am fairly comfortable with the reliability of this HP/Aruba switching platform and the completeness of its standards compliance.
I have upgraded this switch to WC.16.11.0016, which was released in January of this year. The only PoE item mentioned in the release notes is support for PoE in 2-pair mode, and that came in release .13, which shouldn't matter because this switch only supports 802.3af and 802.3at (Type 1 PoE and Type 2 PoE+), which are 2-pair only, if I recall correctly.
My next question: is there any mechanism for viewing logs that may be related to this behavior?
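The only mechanism I have found so far is filtering the switch event log the same way as the PoE table earlier; a sketch, assuming show logging accepts the same pipe filtering as the other commands in this thread (00562 is the event ID on the over-current message, and -r lists newest first):
core# show logging -r | include 00562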
I will next try moving one of these to a completely different, tested, known-working cable run to rule that out completely. As I have stated a few times: these are the only devices that have any issue whatsoever, and given the other fairly thorny issues I have had with them, my suspicion is that the problem is with the 670 v1s, not with the switch or cabling that works with literally everything else.
More data points:
core# show power-over-ethernet ethernet 14
Status and Configuration Information for port 14
Power Enable : Yes PoE Port Status : Delivering
PLC Class/Type : 4/2 Priority Config : critical
DLC Class/Type : 0/- Pre-std Detect : on
Alloc By Config : value Configured Type :
Alloc By Actual : value PoE Value Config : 25
PoE Counter Information
Power Denied Cnt : 0 Short Cnt : 0
LLDP Information
PSE Allocated Power Value : 25.0 W PSE TLV Configured : dot3, MED
PD Requested Power Value : 0.0 W PSE TLV Sent Type : dot3
MED LLDP Detect : Enabled PD TLV Sent Type : n/a
Power Information
PSE Voltage : 53.5 V PSE Reserved Power : 29.3 W
PD Amperage Draw : 162 mA PD Power Draw : 8.7 W
Refer to command's help option for field definitions
core# show power-over-ethernet ethernet 6
Status and Configuration Information for port 6
Power Enable : Yes PoE Port Status : Delivering
PLC Class/Type : 4/2 Priority Config : critical
DLC Class/Type : 0/- Pre-std Detect : on
Alloc By Config : value Configured Type :
Alloc By Actual : value PoE Value Config : 25
PoE Counter Information
Over Current Cnt : 0 MPS Absent Cnt : 0
Power Denied Cnt : 0 Short Cnt : 0
LLDP Information
PSE Allocated Power Value : 25.0 W PSE TLV Configured : dot3, MED
PD Requested Power Value : 0.0 W PSE TLV Sent Type : dot3
MED LLDP Detect : Enabled PD TLV Sent Type : n/a
Power Information
PSE Voltage : 54.8 V PSE Reserved Power : 29.3 W
PD Amperage Draw : 170 mA PD Power Draw : 9.4 W
Refer to command's help option for field definitions
The devices do not seem to be using LLDP-MED to request a power value (PD Requested Power Value shows 0.0 W on both ports), but they do support some LLDP.
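If anyone wants to double-check that from the switch side, here is a sketch, assuming the LLDP neighbor command available on these ArubaOS-Switch releases; the Power-via-MDI / MED extended power fields in its output should show whether the AP ever advertises a power request:
core# show lldp info remote-device 14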
If I can't get this sorted out in line with my normal deployment architecture, I will have to just replace them with something different.