
Fixing the Java startup error: Exception in thread "Thread-14" java.net.BindException: Address already in use


This article presents three different solutions, corresponding to three different situations and lines of reasoning.

My problem appeared some time after integrating xxl-job into a Spring Boot application. If your application includes xxl-job, or anything else that needs its own port, this article may give you some ideas or solve your problem outright.

1 The exception

After starting the project, the exception below was thrown. Oddly enough, the executor still registered successfully with the job scheduling center and jobs still ran.


  .   ____          _            __ _ _

 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \

( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \

 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )

  '  |____| .__|_| |_|_| |_\__, | / / / /

 =========|_|==============|___/=/_/_/_/

 :: Spring Boot ::        (v2.2.2.RELEASE)

 

2023-02-14 11:22:15.516  INFO 4436 --- [           main] com.jxj.SafetyWebserverApplication       : Starting SafetyWebserverApplication on abc with PID 4436 (C:\project\safetyproduction_collectdata\target\classes started by whx in C:\project\safetyproduction_collectdata)

2023-02-14 11:22:15.521  INFO 4436 --- [           main] com.jxj.SafetyWebserverApplication       : No active profile set, falling back to default profiles: default

2023-02-14 11:22:16.722  INFO 4436 --- [           main] .s.d.r.c.RepositoryConfigurationDelegate : Multiple Spring Data modules found, entering strict repository configuration mode!

....

 

2023-02-14 11:22:19.191  INFO 4436 --- [           main] o.s.s.c.ThreadPoolTaskScheduler          : Initializing ExecutorService 'taskScheduler'

2023-02-14 11:22:19.394  INFO 4436 --- [           main] com.jxj.config.XxlJobConfig              : >>>>>>>>>>> xxl-job config init.

2023-02-14 11:22:19.666  INFO 4436 --- [           main] o.s.s.concurrent.ThreadPoolTaskExecutor  : Initializing ExecutorService 'applicationTaskExecutor'

2023-02-14 11:22:20.271  INFO 4436 --- [           main] c.xxl.job.core.executor.XxlJobExecutor   : >>>>>>>>>>> xxl-job register jobhandler success, name:demoJobHandler, jobHandler:com.xxl.job.core.handler.impl.MethodJobHandler@c6bf8d9[class com.jxj.task.WarningTask#demoJobHandler]

2023-02-14 11:22:20.585  INFO 4436 --- [           main] o.s.i.endpoint.EventDrivenConsumer       : Adding {logging-channel-adapter:_org.springframework.integration.errorLogger} as a subscriber to the 'errorChannel' channel

2023-02-14 11:22:20.586  INFO 4436 --- [           main] o.s.i.channel.PublishSubscribeChannel    : Channel 'application.errorChannel' has 1 subscriber(s).

2023-02-14 11:22:20.586  INFO 4436 --- [           main] o.s.i.endpoint.EventDrivenConsumer       : started bean '_org.springframework.integration.errorLogger'

2023-02-14 11:22:20.586  INFO 4436 --- [           main] o.s.i.endpoint.EventDrivenConsumer       : Adding {message-handler:mqttPublisherConfig.mqttOutbound.serviceActivator} as a subscriber to the 'mqttOutboundChannel' channel

2023-02-14 11:22:20.586  INFO 4436 --- [           main] o.s.integration.channel.DirectChannel    : Channel 'application.mqttOutboundChannel' has 1 subscriber(s).

2023-02-14 11:22:20.600  INFO 4436 --- [           main] o.s.i.endpoint.EventDrivenConsumer       : started bean 'mqttPublisherConfig.mqttOutbound.serviceActivator'

2023-02-14 11:22:20.600  INFO 4436 --- [           main] o.s.i.endpoint.EventDrivenConsumer       : Adding {message-handler:mqttSenderConfig.mqttOutbound.serviceActivator} as a subscriber to the 'mqttOutboundChannel1' channel

2023-02-14 11:22:20.601  INFO 4436 --- [           main] o.s.integration.channel.DirectChannel    : Channel 'application.mqttOutboundChannel1' has 1 subscriber(s).

2023-02-14 11:22:20.601  INFO 4436 --- [           main] o.s.i.endpoint.EventDrivenConsumer       : started bean 'mqttSenderConfig.mqttOutbound.serviceActivator'

2023-02-14 11:22:20.601  INFO 4436 --- [           main] o.s.i.endpoint.EventDrivenConsumer       : Adding {message-handler:mqttSubscriberConfig.handler.serviceActivator} as a subscriber to the 'mqttInboundChannel' channel

2023-02-14 11:22:20.601  INFO 4436 --- [           main] o.s.integration.channel.DirectChannel    : Channel 'application.mqttInboundChannel' has 1 subscriber(s).

2023-02-14 11:22:20.601  INFO 4436 --- [           main] o.s.i.endpoint.EventDrivenConsumer       : started bean 'mqttSubscriberConfig.handler.serviceActivator'

2023-02-14 11:22:20.601  INFO 4436 --- [           main] ProxyFactoryBean$MethodInvocationGateway : started bean 'mqttGateway'

2023-02-14 11:22:20.601  INFO 4436 --- [           main] ProxyFactoryBean$MethodInvocationGateway : started bean 'mqttGateway'

2023-02-14 11:22:20.601  INFO 4436 --- [           main] ProxyFactoryBean$MethodInvocationGateway : started bean 'mqttGateway'

2023-02-14 11:22:20.601  INFO 4436 --- [           main] o.s.i.gateway.GatewayProxyFactoryBean    : started bean 'mqttGateway'

Exception in thread "Thread-17" java.net.BindException: Address already in use: bind

    at sun.nio.ch.Net.bind0(Native Method)

    at sun.nio.ch.Net.bind(Net.java:433)

    at sun.nio.ch.Net.bind(Net.java:425)

    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)

    at io.netty.channel.socket.nio.NioServerSocketChannel.doBind(NioServerSocketChannel.java:134)

    at io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:551)

    at io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1346)

    at io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:503)

    at io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:488)

    at io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:985)

    at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:247)

    at io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:344)

    at io.netty.util.concurrent.AbstractEventExecutor.safeExecute$$$capture(AbstractEventExecutor.java:163)

    at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java)

    at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:510)

    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:518)

    at io.netty.util.concurrent.SingleThreadEventExecutor$6.run(SingleThreadEventExecutor.java:1050)

    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)

    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)

    at java.lang.Thread.run(Thread.java:748)

2023-02-14 11:22:21.444  INFO 4436 --- [           main] .m.i.MqttPahoMessageDrivenChannelAdapter : started bean 'inbound'; defined in: 'class path resource [com/jxj/config/MqttSubscriberConfig.class]'; from source: 'org.springframework.core.type.classreading.SimpleMethodMetadata@1d4664d7'

2023-02-14 11:22:21.446  INFO 4436 --- [           main] o.s.a.r.c.CachingConnectionFactory       : Attempting to connect to: [localhost:5672]

2023-02-14 11:22:25.537  INFO 4436 --- [           main] o.s.a.r.l.SimpleMessageListenerContainer : Broker not available; cannot force queue declarations during start: java.net.ConnectException: Connection refused: connect

2023-02-14 11:22:25.545  INFO 4436 --- [ntContainer#0-1] o.s.a.r.c.CachingConnectionFactory       : Attempting to connect to: [localhost:5672]

2 Locating the problem

2.1 Case 1

Some posts online suggest the following:

In older versions of xxl-job, the XxlJobSpringExecutor bean must be declared with initMethod = "start", destroyMethod = "destroy" on the @Bean annotation, whereas in newer versions (such as 2.1.2) those attributes must be removed.

That was not my situation, though. My fix was not to add (initMethod = "start", destroyMethod = "destroy") to the bean; when I did add them, the "address already in use" exception was reported twice instead of once.

 


Exception in thread "Thread-14" java.net.BindException: Address already in use: bind

    at sun.nio.ch.Net.bind0(Native Method)

    at sun.nio.ch.Net.bind(Net.java:433)

    at sun.nio.ch.Net.bind(Net.java:425)

    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)

    at io.netty.channel.socket.nio.NioServerSocketChannel.doBind(NioServerSocketChannel.java:134)

    at io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:551)

    at io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1346)

    at io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:503)

    at io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:488)

    at io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:985)

    at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:247)

    at io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:344)

    at io.netty.util.concurrent.AbstractEventExecutor.safeExecute$$$capture(AbstractEventExecutor.java:163)

    at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java)

    at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:510)

    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:518)

    at io.netty.util.concurrent.SingleThreadEventExecutor$6.run(SingleThreadEventExecutor.java:1050)

    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)

    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)

    at java.lang.Thread.run(Thread.java:748)

2023-02-14 11:25:20.568  INFO 18140 --- [           main] c.xxl.job.core.executor.XxlJobExecutor   : >>>>>>>>>>> xxl-job register jobhandler success, name:demoJobHandler, jobHandler:com.xxl.job.core.handler.impl.MethodJobHandler@1618c98a[class com.jxj.task.WarningTask#demoJobHandler]

Exception in thread "Thread-20" java.net.BindException: Address already in use: bind

    at sun.nio.ch.Net.bind0(Native Method)

    at sun.nio.ch.Net.bind(Net.java:433)

    at sun.nio.ch.Net.bind(Net.java:425)

    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)

    at io.netty.channel.socket.nio.NioServerSocketChannel.doBind(NioServerSocketChannel.java:134)

    at io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:551)

    at io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1346)

    at io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:503)

    at io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:488)

    at io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:985)

    at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:247)

    at io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:344)

    at io.netty.util.concurrent.AbstractEventExecutor.safeExecute$$$capture(AbstractEventExecutor.java:163)

    at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java)

    at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:510)

    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:518)

    at io.netty.util.concurrent.SingleThreadEventExecutor$6.run(SingleThreadEventExecutor.java:1050)

    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)

    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)

    at java.lang.Thread.run(Thread.java:748)


// Original bean definition:

@Bean

public XxlJobSpringExecutor xxlJobExecutor() {

    ...

}

// Modified bean definition:

@Bean(initMethod = "start", destroyMethod = "destroy")

public XxlJobSpringExecutor xxlJobExecutor() {

    ...

}
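For context, a typical XxlJobSpringExecutor configuration bean looks roughly like the sketch below. This is a minimal sketch only: the @Value property names mirror the yaml keys shown in section 2.2, and the setter names assume a 2.2.x-style xxl-job API (older releases use setAppName instead of setAppname), so adjust to your version; whether initMethod/destroyMethod are needed on @Bean depends on the version, as discussed above.

import com.xxl.job.core.executor.impl.XxlJobSpringExecutor;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class XxlJobConfig {

    @Value("${xxl.job.admin.addresses}")
    private String adminAddresses;

    @Value("${xxl.job.accessToken}")
    private String accessToken;

    @Value("${xxl.job.executor.appname}")
    private String appname;

    @Value("${xxl.job.executor.ip}")
    private String ip;

    @Value("${xxl.job.executor.port}")
    private int port;

    @Value("${xxl.job.executor.logpath}")
    private String logPath;

    @Value("${xxl.job.executor.logretentiondays}")
    private int logRetentionDays;

    @Bean
    public XxlJobSpringExecutor xxlJobExecutor() {
        XxlJobSpringExecutor executor = new XxlJobSpringExecutor();
        executor.setAdminAddresses(adminAddresses); // xxl-job-admin address
        executor.setAccessToken(accessToken);
        executor.setAppname(appname);               // must match the executor name configured in the admin UI
        executor.setIp(ip);
        executor.setPort(port);                     // the port whose conflict triggers the BindException
        executor.setLogPath(logPath);
        executor.setLogRetentionDays(logRetentionDays);
        return executor;
    }
}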

2.2 Case 2

Later I found that my actual problem was that the local program and the one deployed online were registered with the same xxl-job admin and configured with the same executor port, which is what caused this error!


xxl:

  job:

    accessToken: xxx

    admin:

#       addresses: http://1xxxxx/xxl-job-admin

      addresses: http://39xxxxx/xxl-job-admin

    executor:

      address: 'xxx'

      appname: safexxxxxst # this name must match the executor name configured in the admin UI

      ip: ''

      logpath: /datxxxxjob/joxxxler

      logretentiondays: 30

      port: 9996

It was this duplicated executor port that caused the problem; just change it:

port: 9996
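If you are unsure whether the configured executor port is actually free on a given machine, a quick check from Java is to try binding it yourself. This is only a diagnostic sketch using the standard java.net.ServerSocket API (nothing from xxl-job); the port number 9996 is taken from the configuration above.

import java.net.BindException;
import java.net.ServerSocket;

public class PortProbe {

    public static void main(String[] args) {
        int port = 9996; // the xxl-job executor port from application.yml
        // try-with-resources closes the socket again right after the test
        try (ServerSocket probe = new ServerSocket(port)) {
            System.out.println("Port " + port + " is free; the executor should be able to bind it.");
        } catch (BindException e) {
            System.out.println("Port " + port + " is already in use: " + e.getMessage());
        } catch (Exception e) {
            System.out.println("Could not test port " + port + ": " + e.getMessage());
        }
    }
}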

2.3 Case 3

Case 2 had already solved my problem, or so I thought, but the same error came back that afternoon, which led to this third case.

The key lines of the error log at startup:


2023-02-14 13:59:12.074  INFO 8364 --- [           main] o.s.a.r.c.CachingConnectionFactory       : Created new connection: rabbitConnectionFactory#131ba005:0/SimpleConnection@5981f2c6 [delegate=amqp://root@127.0.0.1:5673/, localPort= 5415]

Exception in thread "Thread-21" java.net.BindException: Address already in use: bind

    at sun.nio.ch.Net.bind0(Native Method)

    at sun.nio.ch.Net.bind(Net.java:433)

    at sun.nio.ch.Net.bind(Net.java:425)

    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)

    at io.netty.channel.socket.nio.NioServerSocketChannel.doBind(NioServerSocketChannel.java:134)

    at io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:551)

    at io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1346)

    at io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:503)

    at io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:488)

    at io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:985)

    at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:247)

    at io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:344)

    at

Since I had already fixed one port conflict, I was fairly sensitive to ports by then (after the analysis below you will see why), so I searched the yaml file for every occurrence of port and found five in total.

The application's own server port can be ruled out (if that port conflicted, the application would fail outright and stop running).

xxl-job can be ruled out (that conflict was already resolved in case 2).

That leaves the integrated redis, elasticsearch and rabbitmq.

Redis and elasticsearch were not enabled at all, so the problem had to be rabbitmq.

My rabbitmq runs locally in Docker and is used for connection testing.

Then I read the log again carefully, in particular the first line quoted above:

[delegate=amqp://root@127.0.0.1:5673/, localPort= 5415]

The error was thrown right after this log line was printed, and it mentions localPort= 5415, so I checked locally how port 5415 was being used:

netstat -aon|findstr 5415

Sure enough, two different processes (PIDs) were using it!


C:\Uxxxs\1xx0>netstat -aon|findstr 5415

  TCP    127.0.0.1:5415         127.0.0.1:5673         ESTABLISHED     8364

  TCP    127.0.0.1:5673         127.0.0.1:5415         ESTABLISHED     4136

Then I opened Task Manager > Details and found that PID 4136 was Docker.

I restarted my computer, and that solved it.
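If rebooting is not an option, an alternative (assuming the process holding the port can safely be stopped) is to kill it from a Windows command prompt using the PID reported by netstat, for example:

taskkill /PID 4136 /F

In my case that PID belonged to Docker, so restarting the Docker service would presumably have had the same effect as the reboot.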

3 Root cause

After the application starts, it launches a separate thread that binds the xxl-job executor port. Because that port was already occupied, the bind failed and the BindException shown above was thrown from that thread.

4 Thoughts and further study

When a service creates a listening socket, the bind seems to fail whenever the port is already in a state such as LISTENING, ESTABLISHED or TIME_WAIT. The underlying mechanism is worth studying.
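As a small experiment related to the TIME_WAIT case: in Java, enabling SO_REUSEADDR before binding usually lets a listener bind a port whose previous connection is still in TIME_WAIT. The sketch below uses only the standard java.net API and an arbitrary port number; it is not something the application above actually does.

import java.net.InetSocketAddress;
import java.net.ServerSocket;

public class ReuseAddressDemo {

    public static void main(String[] args) throws Exception {
        ServerSocket server = new ServerSocket(); // create unbound so the option can be set first
        // Allow the bind to succeed even if the port is still held by a connection in TIME_WAIT.
        server.setReuseAddress(true);
        server.bind(new InetSocketAddress(9996));
        System.out.println("Bound to " + server.getLocalSocketAddress());
        server.close();
    }
}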

Key points about TCP state transitions

TCP specifies that an established connection is torn down by a four-way handshake between the two peers. If any step of that exchange is missing, the connection is left half-dead and the resources it holds are never released. A network server has to manage a large number of connections at once, so it is important to make sure useless connections are fully closed; otherwise a pile of zombie connections wastes a lot of server resources. Among the many TCP states, the two most noteworthy are CLOSE_WAIT and TIME_WAIT.

1. The LISTENING state

  A service (for example an FTP server) is first in the listening (LISTENING) state after it starts.

2. The ESTABLISHED state

  ESTABLISHED means a connection has been established, i.e. the two machines are communicating.

3. TIME_WAIT

The side that actively calls close() to tear down the connection enters TIME_WAIT once it receives the peer's acknowledgment. TCP requires TIME_WAIT to last 2MSL (twice the maximum segment lifetime) so that leftovers of the old connection cannot interfere with a new one. The kernel does not release the resources held by a connection in TIME_WAIT, so where possible a server should avoid being the side that closes the connection, to reduce the resources wasted in TIME_WAIT.

One known way to avoid the resource waste of TIME_WAIT is to use the socket's SO_LINGER option with a zero timeout, so that close() aborts the connection instead of going through the normal shutdown. This is discouraged by the TCP specification, and in some situations it can cause errors.
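For illustration only, this is roughly what that looks like on a plain java.net.Socket: SO_LINGER enabled with a zero timeout makes close() abort the connection with an RST, so the local side skips TIME_WAIT. As noted above, this is a trick to use with care, not a recommendation; host and port here are placeholders.

import java.net.Socket;

public class LingerZeroDemo {

    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("example.com", 80)) {
            // SO_LINGER enabled with timeout 0: close() sends an RST and skips TIME_WAIT.
            socket.setSoLinger(true, 0);
            // ... use the socket ...
        } // closing here aborts the connection instead of shutting it down gracefully
    }
}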

